Neuroeconomics of Morality

Image Unavailable
Road signs representing a moral choice. Retrieved April 2, 2014, from http://glory2godforallthings.com/wp-content/uploads/2011/09/right-way-wrong-way1.jpg

While most studies in neuroeconomics have dealt with game-theoretic concepts implicating monetary or resource gain, moral neuroeconomics is a subfield of neuroeconomics that focuses on decisions involving morality[2]. The term morality, as used in this subfield, has very little to do with the colloquial term, whose definition can vary with one's religious or cultural beliefs. Instead of trying to define what is morally right or wrong, moral neuroeconomics studies how people make decisions that involve harm or benefit to oneself and to other individuals. More importantly, the focus lies in extremes where the harm or benefit is a matter of life or death, all under uncertain circumstances[2]. As with most neuroeconomic concepts, moral neuroeconomics relies on game-theoretic experiments, in which participants respond to hypothetical moral dilemmas or play games such as the ultimatum game (UG) or the dictator game (DG)[1][2].
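As a concrete illustration of the game-theoretic tasks mentioned above, the sketch below implements the standard payoff rules of the ultimatum and dictator games in Python. The payoff logic is the textbook version of these games; the function names, pot size, and example choices are illustrative, not details of any cited study.

```python
# Toy payoff rules for the ultimatum game (UG) and dictator game (DG).
# Pot size and names are illustrative, not taken from the cited studies.

def ultimatum_game(pot, offer, responder_accepts):
    """Proposer offers `offer` out of `pot`; if the responder rejects,
    both players receive nothing (the defining feature of the UG)."""
    if responder_accepts:
        return pot - offer, offer  # (proposer payoff, responder payoff)
    return 0, 0                    # rejection destroys the entire pot

def dictator_game(pot, transfer):
    """The dictator splits `pot` unilaterally; the recipient has no veto,
    so any positive transfer reflects other-regarding preferences."""
    return pot - transfer, transfer

# Example: a low offer that a fairness-minded responder might reject.
print(ultimatum_game(pot=10, offer=2, responder_accepts=False))  # (0, 0)
print(dictator_game(pot=10, transfer=2))                         # (8, 2)
```

The behavioral fact that makes these games useful in moral neuroeconomics is that real responders routinely reject low UG offers even though rejection costs them money.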

1.1 Process of Moral Cognition

Moral cognition has been divided into two discrete parts by both Borg et al. (2011) and FeldmanHall et al. (2013)[3][4]. However, FeldmanHall et al. describe these two parts as “easy” and “difficult” moral cognition, whereas Borg et al. distinguish them as “deliberation” and “verdict”. Both studies associate these processes with activation of the ventromedial prefrontal cortex (vmPFC) and the temporoparietal junction (TPJ)[3][4]. Both also share the idea that moral decisions have one part in which individual actions within an event are sorted into a binary of “wrong” or “not wrong”, and a second part in which these binaries are integrated.

1.1a Binary Sorting

“Easy” moral cognition, as identified by FeldmanHall et al., is the process in which the brain labels an action in a binary fashion: wrong or not wrong[4]. For example, following the common saying that “stealing is wrong,” identifying the act of stealing as a morally “wrong” action is an instance of “easy” moral cognition.
FeldmanHall et al. categorize this step as “easy” moral cognition using the example of a morally simple situation: knowing to aid a child who has obviously been assaulted[4]. However, most real-life decision making involves not only the binary identification of what is accepted as “right” or “wrong”, but also a more complex process in which various actions must be weighed against one another.
Brain regions associated with this process are the dorsal anterior cingulate cortex (dACC) and the ventromedial prefrontal cortex (vmPFC). In the 2013 FeldmanHall et al. study involving 89 subjects, scenario cards of four types (difficult moral, easy moral, difficult non-moral, easy non-moral) were presented along with a question card to be answered in a yes/no format. The subjects’ neural activity was measured with fMRI and analysed with an F-test. For EM (easy moral) scenarios, the vmPFC and dACC showed higher activation than the bilateral TPJ at a significance level of P < 0.05[4]. The “difficulty” of the moral scenario cards was validated by having subjects rate the difficulty of each scenario on a scale of 0-5, with 0 being the easiest[4].

1.1b Integration

This is the part of moral cognition in which actions previously sorted as “wrong” or “not wrong” are brought together and integrated in order to weigh a final decision in a moral scenario. Following the earlier example of identifying “stealing” as a morally “wrong” action, what happens if further variables are added, such as stealing food to feed your starving family?
FeldmanHall et al. raise a common moral dilemma as an example: a baby cries within a group of people trying to hide from soldiers during a war[4]. Each action can first be identified as “wrong” or “not wrong.” Silencing the baby is wrong, as the action endangers a life; silencing the baby saves the group in hiding and is not wrong. These two judgments can easily pass through the first process as independent actions, but when they are integrated in a real situation, one must be weighed against the other.
The bilateral TPJ is the brain area associated with this integrative or “difficult” step in moral cognition[4]. In the FeldmanHall et al. study, this was the area that showed higher activation for DM (difficult moral) scenarios in F-tests of the fMRI data[4].
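To make the two-stage picture concrete, the toy sketch below separates the two processes computationally: a first stage attaches binary wrong/not-wrong labels to candidate actions, and a second stage integrates those labels with the outcomes at stake. This is only an illustration of the conceptual distinction drawn by the two studies; the scoring rule and numeric weights are arbitrary assumptions, not a model proposed in either paper.

```python
# Toy illustration of two-stage moral cognition using the crying-baby dilemma.
# Stage 1 ("easy"/verdict): binary wrong/not-wrong labels per action.
# Stage 2 ("difficult"/deliberation): integrate labels and outcomes.
# Purely conceptual; not a neural or published computational model.

actions = {
    "silence the baby":        {"wrong": True,  "lives_saved": 20, "lives_lost": 1},
    "do not silence the baby": {"wrong": False, "lives_saved": 0,  "lives_lost": 21},
}

WRONGNESS_PENALTY = 5  # arbitrary weight given to the stage-1 label

def integrate(options):
    """Stage 2: weigh each action's stage-1 label against its outcomes."""
    def score(name):
        o = options[name]
        penalty = WRONGNESS_PENALTY if o["wrong"] else 0
        return o["lives_saved"] - o["lives_lost"] - penalty
    return max(options, key=score)

print(integrate(actions))  # "silence the baby" under this utilitarian-leaning rule
```

Raising WRONGNESS_PENALTY far enough flips the decision, which is one way to see why the integrative stage, rather than the binary sorting, carries the difficulty of the dilemma.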

1.1c Differences between studies

The 2011 Borg et al. study first made this distinction, but its results contradict those of the FeldmanHall et al. study. Borg et al. assert that “deliberation”, or the weighing of moral choices, which corresponds to the integrative or “difficult” moral cognition described by FeldmanHall et al., activates the vmPFC and TPJ, while these areas are not activated during the “verdict” process[3]. However, FeldmanHall et al. show a double dissociation, with vmPFC and TPJ activation underpinning “easy” and “difficult” moral cognition respectively[4]. A possible cause of these differences may be Borg et al.’s broad categorization of the “verdict” process.

Brain Activity in Easy and Difficult Moral Scenarios
Image Unavailable
Fig. 1: F-test of fMRI results during moral scenarios. Key: DNM=Difficult Non-Moral, ENM=Easy Non-Moral, DM=Difficult Moral, EM=Easy Moral.
rTPJ=right TPJ, lTPJ=left TPJ, dACC=dorsal ACC, vmPFC=ventromedial PFC. Activated areas shown in color[4].

Subjects' Difficulty Rating of Moral Scenarios
Image Unavailable
Fig. 2: Subjects’ difficulty rating (with 5 being the most difficult) of “difficult” and “easy” moral and non-moral scenarios[4].

1.2 Parameters of Moral Cognition

Though it is helpful to identify moral cognition as a two-part process, more is needed in order to perform a quantitative analysis of moral cognition. Parameterizing the moral cognitive process allows such an analysis by applying concepts previously used in economics and neuroeconomics[1]. Moral cognition can thereby be broken down largely into three parameters: magnitude, probability, and expected value[1].
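Written out, this parameterization mirrors the standard definition of the value of an economic gamble. In the notation below, which is illustrative rather than taken from the cited papers, m is the outcome magnitude (e.g., the number of lives affected), p is the probability of that outcome, and the expected value is their product:

```latex
% Illustrative notation; the cited studies do not prescribe these symbols.
\mathrm{EV} = p \cdot m,
\qquad \text{e.g.}\quad \mathrm{EV} = 0.25 \times 8~\text{lives} = 2~\text{lives saved in expectation.}
```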

1.2a Magnitude

The magnitude of an event refers to the size of an outcome, whether good or bad[5]. In an economic context, it could refer to the number of dollars gained or lost from an outcome. In moral cognition, however, the magnitude of a decision is harder to measure, since there is no set currency as in economics. To quantify the magnitude of outcomes in a moral cognitive setting, studies such as that of Shenhav and Greene use the number of lives affected as the magnitude parameter[1]. In that study, associations between parameters such as magnitude and brain regions were made by varying the parameters across dilemmas and observing changes in fMRI readings. As shown in Figure 3, varying the magnitude of outcomes (the number of lives saved) in the moral dilemma readings mapped magnitude-related brain activity onto regions including the posterior cingulate cortex and the central insula[1].

Brain Regions Associated with Probability and Magnitude
Image Unavailable
Fig. 3: fMRI showing regions activated with varying parameters of magnitude and probability during dilemma readings[1].

1.2b Probability

The probability of an event is the likelihood, or degree of certainty, that a certain outcome will result. In Shenhav and Greene’s experiment, brain areas associated with probability were identified by varying the risks associated with decisions in the dilemma readings and tracking activity through fMRI[1]. To account for individual differences that may alter responses to varying probabilities, activation of the right anterior insula (R-aIn), which has been associated with risk-taking behavior, was compared across individuals[1][6]. Differences in R-aIn activation correlated with individuals’ sensitivity to probability in the dilemma readings[1].

1.2c Expected Value

The expected value of an outcome refers to the net gain or net loss expected as a consequence of an event or a decision. In Shenhav and Greene’s dilemma-reading experiment, expected value was the product of probability and magnitude in a tradeoff where a decision could be made to save an individual with varying rates of success, at the varying risk of losing the lives of others[1]. Interestingly, the expected value analysis of the dilemma readings showed that subjects estimated expected value non-linearly and logarithmically, rather than linearly[1]. This is shown in Figure 4: the left panel shows a yellow dotted line indicating the “break-even” points, at which the risk, in terms of the probability of losing x people, is exactly offset by the probability of saving one person. The right panel shows the subjects’ average ratings: subjects tended to overestimate the risks of the expected outcome when the probability of survival was low, even when fewer people were at risk, and to underestimate those risks when the probability of survival was high, even when more people could potentially be lost[1]. Using the same methods that identified associations between the other parameters and brain regions, expected value was primarily associated with activation of the vmPFC[1].
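The break-even logic and the linear-versus-logarithmic contrast can be sketched numerically. For illustration only, the tradeoff is simplified here to a choice between saving one life for certain and attempting to save n lives with success probability p; the actual dilemmas in Shenhav and Greene’s task are structured differently, and log(1 + n) is just one simple non-linear valuation, not the functional form they fit.

```python
import math

# Linear (utilitarian) valuation: saving n lives with probability p is worth
# p * n, so it breaks even against one certain life when p = 1 / n.
def break_even_probability(n):
    return 1.0 / n

# Logarithmic valuation: the value of n lives grows as log(1 + n) rather
# than n, a simple stand-in for the non-linear estimates subjects produced.
def log_value(p, n):
    return p * math.log1p(n)

certain_one_life = math.log1p(1)  # log-value of saving one life for certain
for n in (1, 2, 5, 10, 20):
    p_star = break_even_probability(n)
    print(f"n={n:2d}  linear break-even p={p_star:.2f}  "
          f"log-value of risky option={log_value(p_star, n):.2f}  "
          f"vs certain life={certain_one_life:.2f}")
```

At the linear break-even point, the logarithmic valuation of the risky option falls further below the certain option as n grows, reproducing in miniature the pattern of subjects discounting large potential gains when success probabilities are low.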

Expected Moral Value
Image Unavailable
Fig. 4: Expected moral value distribution. Left: projected results of a utilitarian expected value estimation showing break-even point as
a yellow-dotted line. Right: ratings given by subjects in dilemma readings with black-dotted line showing the break-even point[1].

Bibliography
1. Shenhav, A., and Greene, J.D. (2010) Moral Judgments Recruit Domain-General Valuation Mechanisms to Integrate Representations of Probability and Magnitude. Neuron. 67, 667-677.
2. Kvaran, T., and Sanfey, A.G. (2010) Toward an Integrated Neuroscience of Morality: The Contribution of Neuroeconomics to Moral Cognition. Topics in Cognitive Science. 2, 579-595.
3. Borg, J.S., Sinnott-Armstrong, W., Calhoun, V.D., and Kiehl, K.A. (2011) Neural basis of moral verdict and moral deliberation. Social Neuroscience. 6(4), 398-413.
4. FeldmanHall, O., Mobbs, D., and Dalgleish, T. (2013) Deconstructing the brain’s moral network: dissociable functionality between the temporoparietal junction and ventro-medial prefrontal cortex. SCAN Advance Access. 39, 1-10.
5. Luo, Q., and Qu, C. (2013) Comparison Enhances Size Sensitivity: Neural Correlates of Outcome Magnitude Processing. PLoS ONE 8(8): e71186.
6. Kuhnen, C.M., and Knutson, B. (2005). The neural basis of financial risk taking. Neuron 47, 763-770.
