Chen-Bo Zhong believes that when we over-emphasize reason and deliberation, we may start to ignore our emotional reactions
Q. You believe that it is more important than ever to understand how people resolve ethical dilemmas. Why now?
A: It’s not because people are more unethical than they were, say, 50 years ago; unethical behaviour has always been an issue. But the extent to which societies and people have become interconnected means that the consequences of people’s actions are more significant than ever. Modern organizations represent the interests of millions of people and have stakeholders across continents, so the ripple effect of one individual’s behaviour can be far-reaching. That’s why unethical behaviour demands attention from both researchers and the general public, in terms of understanding a) why it happens, and b) how we can counteract it.
Q. Current theories in ethical decision making are heavily influenced by Lawrence Kohlberg’s model of moral development. Please summarize its key elements.
A: Kohlberg’s theory holds that moral reasoning – the basis for ethical behaviour – has six identifiable developmental stages, each more adequate at responding to moral dilemmas than its predecessor. The six stages can be grouped into three levels of two stages each: pre-conventional, conventional and post-conventional. I’ll quickly summarize each.
• The pre-conventional level is especially common in children, although adults can also exhibit this level of reasoning. Reasoners at this level judge the morality of an action by its direct consequences. They are solely concerned with the effects of an action on the self in an egocentric manner. For instance, "The last time I did that I got spanked, so I won’t ever do it again." At this stage, individuals lack any recognition that others' points of view are different from their own.
• The conventional level of moral reasoning is typical of adolescents and adults. Those who reason in a conventional way judge the morality of actions by comparing them to society's views and expectations concerning right and wrong. At this level an individual obeys rules and follows society's norms even when there are no consequences for obedience or disobedience.
• The post-conventional level is marked by a growing realization that individuals are separate entities from society and that the individual’s own perspective may take precedence over society’s view. As a result, individuals may disobey rules that are inconsistent with their own principles, which typically include basic human rights such as life, liberty and justice. Some theorists have speculated that many people never reach this level of moral reasoning.
The ethical decision-making research that follows in the Kohlberg tradition paints a picture of decision making as very linear and cognitive, broken into discrete steps: first, you recognize the moral dilemma; then you reason based on your moral principles; then you make a moral judgment; and then you act on that judgment. My work challenges the idea that moral reasoning is so systematic and linear, building on the findings of NYU psychologist Jonathan Haidt. He was the first to challenge rationality-based moral judgment, and his work led me to look more closely at people’s intuitive, affect-based moral reactions.
Q. Tell us a bit more about Haidt’s findings.
A: His Moral Foundations Theory considers the way morality varies between cultures and identifies six foundations that underlie morality in all societies and individuals. He names them using pairs of opposites to indicate that they provide continua along which judgments can be measured: care/harm; fairness/cheating; loyalty/betrayal; respect for authority/subversion; sanctity/degradation; and liberty/oppression. Haidt found that the more politically liberal or left-wing people are, the more they tend to value care and fairness, and the less they tend to value loyalty, respect for authority and sanctity. Conversely, the more conservative or right-wing people are, the more they tend to value those latter three foundations (loyalty, authority and sanctity). Similar results were found across the political spectrum in other countries.
Haidt showed that, oftentimes, moral judgment follows not from an analysis of harm but from a very instinctive, intuitive reaction of disgust; as a result, moral judgment is often based on intuition. This line of research suggests that moral judgment and decision making are not just a systematic process: people’s affect, or emotion, plays an important role in these judgments.
Q. Describe how your research expands on this idea.
A: I set out to show that there are two distinct processes involved in moral decision making: one is very reasoned and systematic, while the other is more intuitive and affective. As indicated, the predominant view is that a reasoned, linear process usually results in better decisions – that if you think deliberatively and carefully about a decision, it will lead to better results. But we are finding that our moral understanding is also affect-based: when we feel a sense of disgust or guilt about something, that feeling itself signals that something may be wrong.
I believe that our affective reactions actually play a role in regulating our ethical behaviour. This touches on [University of Southern California Professor] Antonio Damasio’s work on ‘somatic markers’. Prof. Damasio has shown that reactions such as guilt, disgust and happiness play an important role in regulating behaviour. For instance, think for a moment about meeting up with a good friend later today; for most people, just imagining this scenario gives them a warm and fuzzy feeling inside, indicating that they look forward to it. Such feelings can serve as ‘anchors’ or indicators of our values; without them, our ability to assign values to our behaviours decreases.
I wanted to look at the extent to which affective reactions play a role in regulating ethical behaviour. If you think about cheating on your taxes, for instance, will the guilt you feel reduce your likelihood of cheating? Or, think about telling a substantive lie to your mother; if you feel no guilt about the prospect of doing this, what does that say about you? I would argue that our affective reactions to mental simulations play an important role in reducing the likelihood that people will actually cheat and lie.
Q. You did several experiments to study the ‘ethical dangers’ of deliberative decision making. What did you find?
A: My hypothesis was that a greater degree of deliberation might actually have negative ethical consequences. I’m not saying that reasoning about things in a linear fashion is inherently bad. The problem is that when we over-emphasize reason and deliberation and play down the role of emotion, we might start to ignore our emotional reactions. For instance, say a CFO named ‘John’ misreports his company’s earnings. He might feel really bad about it, but if he has been taught to ignore his feelings, he can easily find 10,000 reasons why misreporting is a good idea – for him and for his company. All he has to do is deliberate for a little while, and his feelings of guilt will no longer bother him.
I looked specifically at situations where people could quite easily behave selfishly by cheating or lying and hurting others’ interests. We began by asking one group questions that would evoke a strong emotional reaction, like, “What do you feel when you think about George W. Bush?” We asked the other group to answer a series of math questions that involved making calculations. Then we placed all participants into a ‘game’ where they had a chance to lie and cheat. We found that the calculation task actually led to more cheating and lying than simply asking people how they felt about something. We replicated this finding several times, and regardless of whether the calculations had anything to do with economics, they increased the likelihood that people would cheat and lie afterwards. This was the first study to show that there are downsides to a calculative approach.
Q. You believe your findings should prompt us to think more carefully about the social consequences of economic education. Please explain.
A: Because Economics is very much based on rationality, reason and incentives, Economics education biases people towards numbers and calculations – towards wanting things to be easily quantifiable. People educated this way get used to putting numbers to things and calculating costs and benefits, but the fact is, not everything can be calculated. What is the cost, for instance, of moral self-condemnation? And what is the benefit of virtue?
Economics tends to under-emphasize things that are less tangible, less quantifiable, and that can have impacts across multiple domains. Research shows, for instance, that asking people to evaluate a product on a numeric metric actually dampens their enjoyment of the product. So, if you sample a French wine and afterwards I ask you to rate it on a scale from one to ten, the mere act of doing so dampens your enjoyment of the wine, because in the process of translating the rich experience of enjoyment into very narrow numbers, something gets lost. Economists themselves have actually recognized this: Robert Frank showed that students trained in Economics tend to be more competitive and often make more selfish choices. It is possible that being reminded of money, a standard quantitative measure of value, automatically activates a calculative mindset that suppresses emotional influence and disinhibits unethical behaviours.
Q. What can organizations do to embrace your findings and improve their decision-making strategies?
A: Economics is undeniably an invaluable stream of thought and research, but I do think it would be good to strike a balance between Economics and Psychology in the types of incentive structures we put in place, so that we don’t diminish the richness of people’s intuition and affective experience.
Bringing this thinking into an organization involves creating a culture that is not afraid of intuitions and emotions. These things can actually help you meet business goals. One example is Johnson & Johnson’s Code of Conduct, which includes images of actual parents and children who use their products to elicit strong reactions from employees. Rather than saying, “These are the rules that you need to follow,” the message is, “Never forget that we are making products for mothers and children and for hospital patients.” The images naturally elicit strong affective reactions that motivate people to do the right thing – including following the Code of Conduct.
Of course, we mustn’t fall into the trap of saying, ‘everyone should do what they feel like doing’. Certainly, people vary in the extent to which they experience moral emotions: a sociopath may not feel any guilt at all – he may actually derive joy from transgressions. The key is to recognize that there is some common ground with respect to human emotions; people are quite similar in how they react to things. And if we understood this common ground better, perhaps we could harness the benefits of these emotional reactions. My research highlights the urgency of a decision strategy that weighs both reason and intuition.
Chen-Bo Zhong is an assistant professor of Organizational Behaviour and Human Resource Management at the Rotman School of Management. His paper, “The Ethical Dangers of Deliberative Decision Making,” appeared in Administrative Science Quarterly and can be downloaded at http://asq.sagepub.com. Rotman faculty research is ranked in the top ten worldwide by the Financial Times.
[This article has been reprinted, with permission, from Rotman Management, the magazine of the University of Toronto's Rotman School of Management]