
Moral Impartiality, Third Party Judgments, and George Floyd Riots with Peter DeScioli



Dr. Peter DeScioli is an Associate Professor of Political Science at Stony Brook University, where he teaches courses on Moral Politics and Public Policy. His research concerns how people strategically form friendships, how people understand notions of property and ownership, and the role of moral condemnation in social functioning. Today, we discuss his research on moral condemnation: the function of moral impartiality, third-party judgment, and punishment. Specifically, we talk about his theory, which construes moral judgment as playing a functional role in reducing the costliness of conflicts as they arise.


APA citation: Cazzell, A. R. (Host). (2020, June 9). Moral Impartiality, Third Party Judgment, and George Floyd Riots with Peter DeScioli [Audio Podcast]. Retrieved from https://www.ambercazzell.com/post/msp-ep35-PeterDeScioli


 

Note: This transcript was automatically generated. Please excuse typos and errors.


Peter DeScioli (00:01:16):

I've always been drawn to kind of the big questions of philosophy. So I was a philosophy major as an undergraduate. And at some point, while you're doing that, you start wondering what is going on with this mind that's philosophizing, right? So philosophy naturally kind of turns to psychology. That's how I got interested in psychology. And then pretty early as an undergrad, I got introduced to evolutionary biology and comparisons to other primates. And it was just very obvious to me that humans are animals, that we're primates, and that comparing to other species was a good way to understand what's going on with our philosophical minds. So I kind of set off to understand the human mind, thinking about it from an evolutionary perspective. And I wasn't interested in morality per se at that time.


Peter DeScioli (00:02:19):

The idea was more: what's going on with the human mind? Why are we philosophizing? Why are we trying to figure out our place in the universe? And one of the natural things to think about there is why we have big brains. So we're thinking about human evolution, and we see this rapid brain expansion over the last few million years. And it's pretty natural to ask what happened; presumably somewhere in there is where we became a philosophical species. So that was kind of the question: where did this human intelligence come from? And as I looked into evolutionary psychology and started learning about it, I was probably a little surprised at first to find that the main theory for why humans evolved giant brains is that it's to deal with the social world.


Peter DeScioli (00:03:18):

So one of the most difficult things our brain has to deal with is interacting with others. This means cooperating with them, arguing with them, fighting with them, coordinating, trading: all of these things that we do with other people are very difficult and complicated for a brain to pull off. That's called the social intelligence hypothesis. It goes by a dozen other names, the social brain hypothesis, the social complexity hypothesis, but they all mean the same thing: that it's the social world, humans' increasing sociality, that was the selective force that shaped our big brains. And specifically, most of that research has focused on cooperation. And I know that you know a lot of that work, and you've had a lot of people talk about cooperation on your show.


Peter DeScioli (00:04:19):

So that's what I was thinking about too, just like everybody else. I was thinking about cooperation: how do we evolve big brains and start cooperating in large groups of unrelated individuals, which is how biologists think about it. And one of the more promising theories at the time, and still, is that punishment had a big place in this, often called third-party punishment, which means punishing someone for something they did to someone else. And humans do this much more than most other species, so it kind of stands out. And there are many well-known experiments where allowing people to punish each other promotes cooperation in the lab. But a caution there: it kind of depends exactly how you let them punish each other. If you let the people who get punished punish back, then the whole thing falls apart.
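Note: the lab experiments referenced here are typically public goods games with a punishment stage. The following is a minimal Python sketch of that setup; the payoff parameters and punishment amounts are illustrative assumptions, not the values from any particular study.

```python
# Minimal public goods game with a punishment stage (illustrative
# parameters only, not any specific experiment's design).

ENDOWMENT = 20
MULTIPLIER = 1.6   # pooled contributions are multiplied, then split evenly
PUNISH_COST = 1    # punisher pays 1 per punishment point...
PUNISH_HARM = 3    # ...to reduce the target's payoff by 3

def play_round(contributions, punishments):
    """contributions[i]: what player i puts in the pool.
    punishments[i]: dict mapping target index j to points i assigns."""
    n = len(contributions)
    pool_share = MULTIPLIER * sum(contributions) / n
    payoffs = [ENDOWMENT - c + pool_share for c in contributions]
    for i in range(n):
        for j, points in punishments[i].items():
            payoffs[i] -= PUNISH_COST * points  # punishing is costly
            payoffs[j] -= PUNISH_HARM * points  # being punished costs more
    return payoffs

# Three cooperators and one free rider; each cooperator punishes the free rider.
contribs = [20, 20, 20, 0]
punish = [{3: 4}, {3: 4}, {3: 4}, {}]
print(play_round(contribs, punish))  # [20.0, 20.0, 20.0, 8.0]
```

Here punishment makes free riding unprofitable (the free rider nets 8, versus 32 if everyone had cooperated), which is how it can sustain contributions over repeated rounds. The caveat above corresponds to adding a counter-punishment stage: if the punished player can retaliate against the punishers, punishing becomes too costly and cooperation tends to unravel.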


Peter DeScioli (00:05:21):

Then they just start fighting, and in the end nobody comes out with any money. So anyway, that story was probably a little too simple, but that's what many people were thinking about when I was in graduate school around 2005, 2006. So I was thinking about the same thing, thinking about third-party punishment. And as I thought about that, I was just kind of noticing that usually when people do this, there's a moral justification for it. And so that brought me to moralistic punishment as the main form of third-party punishment, and that's what brought me to moral psychology. And then one of the things I noticed there, that kind of stood out to me, was this idea that we have in our moral judgments that they're supposed to be impartial.


Peter DeScioli (00:06:22):

And this, you know, probably seems pretty normal to most people, because that's what our intuition says: oh, yeah, of course moral judgment should be impartial. It doesn't depend on who you are; it just depends on what you did. But since I was trained with this evolutionary psychology background and all these theories of cooperation, I knew that those theories say that partiality is like the most important part of it. You're supposed to help those who help you and not help those who don't help you. You're supposed to be partial towards your family and towards your friends, and most of the models of the evolution of cooperation depend on that. As soon as people are impartial, or indiscriminate, cooperation falls apart and cheaters start to thrive. So immediately this concept just kind of stood out as something odd and, you know, requiring attention.


Peter DeScioli (00:07:22):

And I remember in an early paper Jonathan Haidt also commented on this, and that stood out to me. So that got me started thinking about, you know, how does partiality work in humans? And that made me think about coalitions and alliances. And that's kind of what set me off into my current thinking on moral judgment: that it helps solve some of the problems that come out of our coalitional thinking, namely that we side with our allies, and then when everyone does that, we end up in a giant fight. So that's the basic trajectory, from starting with philosophy and ending with moral judgment.


Amber Cazzell (00:08:17):

Really cool. Very cool trajectory there, I love that. And that was such a beautiful transition right into the topic of discussion today. But before jumping in there, I want to ask a little bit: you started from a philosophical background and then started wondering about the mind. I'm curious, as you found yourself studying morality and moral judgments, what you've thought about this, because your work very much takes a functional view, which makes sense, right, from an evolutionary perspective. And I'm also thinking about philosophy and your thoughts on the idea of what is versus what ought to be, almost like the is-ought distinction, but I want to be careful there because I know that's often misunderstood. This idea that morality, really the world of should, is fundamentally its own separate realm, at least philosophically; some people seem to believe that. So I'm curious about your thoughts on the independence of morality apart from the physical world, and how that's informed your taking up of this functional lens in your own studies.


Peter DeScioli (00:09:58):

Yeah, right. So there's that fundamental distinction between, you know, what is and what ought to be. It's hard to say; it intersects with a bunch of different issues, so I'm trying to figure out which one to...


Amber Cazzell (00:10:20):

Well, when you were studying philosophy as an undergrad, and I presume that must have been as an undergrad because I believe your doctorate is in psychology, right? And it sounds as though you sort of backed into morality; that wasn't the original thing. But I'm curious what your thoughts were, as an undergrad, about the distinction between shoulds and, you know, the physical, material world.


Peter DeScioli (00:10:54):

Yeah. So I was a philosophy major, and I was mostly interested in questions like the nature of reality, the difference between our perceptions and reality, what we can know about the world, and science and evidence and things like that. So much more on that side. I was never very interested in moral philosophy, because when I read moral philosophy, it sounded like they were just making things up. For every story I heard a moral philosopher tell, I could think of dozens more that would come to the same conclusion or a different conclusion. It always felt pretty arbitrary. They're just kind of making up terms and, you know, ultimately just saying what they want to be the case, with rational-sounding justifications.


Peter DeScioli (00:11:46):

It just seemed like they weren't doing anything of, you know, too much interest. So I wasn't interested in that very much at first. Only when I started to see it as a strategy that humans are using to interact with each other, as a part of human social life, did I start to become interested in it. And I would say my feelings are somewhat similar still. Like, it's not exactly a scientific inquiry to try to figure out what's right and wrong. But now I view it much more from a psychological perspective. So now it's interesting to watch people have those debates, because I have ideas about why they're having that debate. And particularly, you know, it would be pretty easy if we could just say morality was just a matter of taste.


Peter DeScioli (00:12:44):

It's just, you know, some people like to eat chocolate and some people prefer vanilla; it's just a taste. That would be pretty easy and very consistent with what we know about the mind and things like that. You know, people want things, they want different things, this causes them to disagree sometimes, and everything seems to fit pretty well: that it's a taste or a preference. That seems pretty clear, and if people disagree, then they just have different preferences. But this doesn't fit our moral psychology at all, because our moral psychology specifically says that it's not just a preference. And so if we were to continue to hold that, we would just be holding a view that's completely the opposite of what our moral concepts say. And it's not a good theory of morality to just deny morality altogether.


Peter DeScioli (00:13:37):

So the concepts say that they're objective. It doesn't mean they really are, but that's an important part of the concept that we have to, you know, deal with. And the theory I've been working on gives a reason for why this is the case: that the function of our moral judgments is to reach agreements. And if we were to each have our own moral judgment, that would completely defeat the purpose of having a moral judgment, because how could we agree on something, how would it serve the purpose of coming to agreements, if we were to just acknowledge that it's each of our private tastes? So I think a good theory about this would need to balance that: you can't just say it's taste, because that's completely at odds with the basic concept itself.


Peter DeScioli (00:14:40):

But then it's also kind of a tall order to try to say that one particular moral rule is factually correct. That's not going to hold up too well either; some rules are going to be more stable than others. So the rules we're most familiar with, don't steal, don't kill, those are the most stable. But there are plenty of other rules that we disagree on, like whether it's okay to have sex before marriage, or whether it's okay to not pursue science to try to understand the universe and instead just defer to religious authorities. Those are things that people disagree about, and there's not going to be a factual analysis that can tell you the answer to that.


Amber Cazzell (00:15:31):

Yeah. Yeah. Okay. Well, let's go ahead and jump into that theory then, because you have developed such a theory, and I would love to discuss it. So, you know, I read through your papers and I'm kind of familiar, but I don't know that all the listeners will be familiar with your theory of the functional role of moral judgments. So let's jump into that. Maybe a good place to start is to talk about your thought process regarding the types of strategies that could be used to reduce costly conflicts. Although, actually, let's back up even further. Could you explain, from your theory's perspective, what the role of moral judgments is in the first place? Because we've been sort of beating around the bush with it here, but let's just put that out there in explicit form.


Peter DeScioli (00:16:34):

Yeah. So the idea is kind of rooted in conflict and coalitions, or alliances. The idea is that in everyday life, conflicts occur all the time. These conflicts could be fist fights; I think that's often what people think of when you say the word fight. But humans are pretty skilled fighters, and being a skilled fighter means you don't just come to blows right away, because a basic idea in fighting is to try to minimize your costs. You want to win the fight, but you want to minimize the costs. So skilled fighters have many strategies for avoiding the costs. And when I say this, I'm thinking about the animal world, because every animal fights, and some are more skilled at it than others. And the ones that are more skilled are better at reducing the costs of fighting.


Peter DeScioli (00:17:36):

In some species, fighting looks like slamming into each other and blood spewing everywhere; you can think of elephant seals as a good example. In some species, let's take red deer, a fight might look like roaring at each other from a hundred yards away and then one walking away. So the roaring is a good way of getting out of the fight without too much damage. And that's the kind of fighting that humans do most of the time. You know, in our everyday life, if we're trying to relate to these ideas, most of us aren't swinging at each other. When we get into an argument or disagree about something, we're doing something more like the red deer: we're roaring at each other. Or if you're a moral philosopher, then you're, you know, sending arguments against the other person.


Peter DeScioli (00:18:28):

So we roar at each other. And, you know, the roar is considered an honest signal of body size, because you can't easily fake a deep-sounding roar; you have to have the big body to show that. And that's kind of the neat trick they have for settling the dispute without having to come together and slam their heads against one another. Similarly, when moral philosophers roar, their arguments are going to signal things like what other people are likely to believe. So if you make a really good argument, then the other person kind of says, you know, most people are probably going to buy that person's argument. So they're, quote unquote, winning. Most of our fighting takes this kind of form.


Peter DeScioli (00:19:21):

In, you know, just everyday arguments over who should do the dishes or things like that, we're kind of doing these subtle displays. And that's the sort of everyday interaction where moral judgment enters into it. So when you say that someone was wrong because, say, your neighbor's playing their stereo too loud, you make a moral judgment about that. You're using a strategy for dealing with these everyday conflicts that come up, and it's only one of many strategies that you have, and it has distinctive characteristics and, you know, pros and cons compared to other strategies.


Peter DeScioli (00:20:16):

Oh yeah, so what is the function of it? It's not too easy to figure out why you would use a moral judgment as opposed to something else, like just politely asking your neighbor to turn it down rather than saying it's wrong for you to play music at 2:00 AM. So it's actually a fairly complicated strategy, and it takes a while to unwind why you would use one versus the other. But the theory that I'm working with is that what moral judgment is especially good at is when two other people are having a conflict, and there are multiple others, concerned bystanders who are observing it, or family and friends who might come to know about it, and those bystanders need to choose sides in this conflict.


Peter DeScioli (00:21:11):

And so they're going to use moral rules to determine which side they'll take. And the benefit of this is that they can choose the same side this way. This avoids getting into a situation where everyone just supports their own best friends and then both sides escalate, with one coalition going against another, which is something that happens in humans all the time. So we evolved a strategy for trying to reduce this. Basically, humans get into a fight, a squabble between two individuals, then they both call in their friends, then it's five against five, then it's ten against ten, and then everyone suffers the damages of fighting. This is how human fights have regularly gone for millions of years. And so in that process, we developed a new tactic, which was: if they cross a moral rule that we've already laid out and argued about and set clearly for everyone to see, then I'll defer in that case and side against my ally, if they're the one that broke the moral rule. So that's the basic strategy.


Amber Cazzell (00:22:29):

Yeah. And could you also talk a little bit about why, when choosing sides, people wouldn't just defer to the most powerful, the person with the most resources in a given conflict?


Peter DeScioli (00:22:52):

Sure. Yeah. So that is the third major strategy that people use for choosing sides, which is to side with the higher-status person against the lower-status one. So humans are hierarchical just like many other mammals; we have hierarchies of power. And hierarchies are actually also designed to reduce the cost of conflict. It's a little bit of an odd thing to say, because it looks like the higher-status person is kind of pushing around the lower-status one, and that is true. But the whole purpose of having dominance hierarchies and having status is to avoid coming to blows for every single disagreement, because the lower-status one remembers that the higher-status one beat them before. That's the origin of dominance hierarchies. So humans form dominance hierarchies like many other mammals, but we also have anti-hierarchy strategies.


Peter DeScioli (00:24:02):

And we probably do that more than any other mammal. A few other mammals have some similar things; chimps will gang up to attack an alpha that's becoming too bossy. So humans are hierarchical, but probably more than hierarchical, we're anti-hierarchical. We have both of these strategies in our minds, and we lean towards the anti-hierarchical. This conclusion is especially based on ethnographic research among hunter-gatherer societies showing that they don't tolerate individuals that try to use too much power and impose on others. They're regularly knocking down people that are bragging too much and telling others what to do. So, you know, naturally we're anti-hierarchical, and it takes extra effort to create hierarchies, which are then not stable.


Peter DeScioli (00:25:08):

So anyway, when a conflict occurs, we could side with the hierarchy. This would be like siding with a teacher against a student, because the teacher's higher status than the student, or with the boss against a coworker, or with, you know, the president against a senator. So we do those things. And the problem with that is that now those individuals know they have reliable support from others, and so they're just going to become, you know, more power hungry. And that is a threat to every person except the person at the very top of the hierarchy. Because once the person at the top knows they can count on everyone to support them...


Peter DeScioli (00:26:03):

They can just go around taking stuff from everybody, including even the person just below them. So that means everybody except the person at the top has some interest in containing this person. That's the basic problem with always supporting the one who's in power. It helps sometimes, and sometimes human groups go that direction, in large part because choosing sides is what is called a coordination problem: the best strategy depends on what others are doing. If everybody else chooses sides based on power, and you're the only one that doesn't, that just means you're going to be on the minority side of fights, and you're going to suffer all the costs of losing in every conflict. So this also kind of nicely explains why we're generally anti-hierarchical but can get to an equilibrium of extreme hierarchy, you know, something like Nazi Germany being the obvious example. This theory explains why that can happen even though it's unnatural: once everyone thinks that everyone's going to decide based on hierarchy, that's the equilibrium, and so they're stuck in that equilibrium.
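Note: to make the equilibrium point concrete, here is a toy Python sketch of side-taking as a coordination problem. The payoff structure (the minority side loses the fight and pays a cost) is an illustrative assumption, not DeScioli's actual model.

```python
# Toy model: a conflict breaks out between a high-status disputant and a
# low-status disputant who is in the right under the moral rule. Each of
# nine bystanders sides with power ("power") or with the rule ("rule").
# Whoever ends up on the minority side loses the fight and pays a cost.

LOSS_COST = 1.0

def payoff(my_strategy, others_with_power, n_others):
    """Payoff to one focal bystander, given how many of the other
    bystanders side with the powerful disputant."""
    my_side = (others_with_power + 1 if my_strategy == "power"
               else n_others - others_with_power + 1)
    return 0.0 if my_side > (n_others + 1 - my_side) else -LOSS_COST

N_OTHERS = 8
for k in range(N_OTHERS + 1):  # k = how many others side with power
    best = max(["rule", "power"], key=lambda s: payoff(s, k, N_OTHERS))
    print(f"{k}/{N_OTHERS} others side with power -> best response: {best}")
```

The printout shows the best response tracking the expected majority: when few others side with power, deciding by the rule wins, and when most do, siding with power wins. Both "everyone decides by the rule" and "everyone decides by hierarchy" are self-reinforcing, which is the sense in which a group can get stuck in an extreme-hierarchy equilibrium even if most members would prefer the other one.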


Amber Cazzell (00:27:29):

So, as we're recording this in the middle of the coronavirus pandemic, there are riots going on because of the recent incident of police brutality, and in general it's been a politically divisive time for several years now. This theory would seem to suggest that moral judgments serve the role of creating an imbalance in side-taking over conflicts. And yet it seems that there are so many examples of conflicts which are not so lopsided in terms of support on either side. Could you speak a little bit about how your theory makes sense of that?


Peter DeScioli (00:28:26):

Sure. So yeah, one function of moral judgment is that it can help us detach from alliances when we need to. It doesn't mean we'll always use it that way. We could choose to just align our moral judgments with our alliances, especially if we don't do it very skillfully, so that it's kind of obvious that we're calling it a moral judgment but really we're just supporting our own faction. And once that's clear to everyone, moral judgment will lose its weight entirely, because the only point of being impartial is if you think the other side is going to be impartial too. If they're not being impartial, then there's no point in being impartial yourself.


Peter DeScioli (00:29:23):

So then basically what has happened in that case is moral judgment has actually just been deleted from the menu of strategies, and we're back to just coalitions. So if you see polarization of moral judgments, so that everyone knows that Democrats morally judge this and Republicans morally judge that, once that becomes sort of common knowledge, those moral judgments have basically become disabled. Now everyone just recognizes that as a coalitional move, not as a moral move, and there's no reason for them to be more impartial when they hear it, because they have no expectation that the other side will be impartial. So once you're using moral rules that obviously hypocritically, they fail to serve any purpose, and then we're just reduced to the world that we were in before morality, where all we have is coalitions and, you know, threats of force to try to resolve a conflict.


Amber Cazzell (00:30:42):

That's interesting. And I'm now curious what your thoughts are about some of Linda [inaudible]'s work on moral convictions, because, you know, she would take the stance that people experience some of these political issues as rooted in moral issues, almost more strongly in some ways, or at least with more fervor. I mean, maybe that's not fair, right, because everybody agrees that murder is bad. But I don't know; people do seem to experience these things, certainly there's evidence that they experience them as coalitional, but there's also a lot of evidence that seems to suggest that they really experience them as moral issues.


Peter DeScioli (00:31:35):

Yeah. So, you know, there's going to be a mixture of both. But when it comes to the recent riots, I mean, when it comes to a blatant murder that everybody saw on a video, I think the moral judgments are pretty clear there. And that's also why we're seeing such an overwhelming response of, you know, rioting, things like that: because it is pretty clear. If it was less clear, then you'd see just as much opposition coming from the other side. So the call to protest in this case is especially strong, because it is a clear case; it goes to a fundamental moral rule that's very stable and that no one can consistently oppose on the other side.


Amber Cazzell (00:32:33):

Okay. So returning back to your theory, one of the pieces I read in your paper that I was really struck by...


Peter DeScioli (00:32:45):

So actually, could I just pause you on that? I think this situation is like an exception that kind of proves the rule, really. Because if we're thinking about this as a conflict about police brutality, if you look at people who are typically on the side of the police, they're not typically on the side of the police right now. It doesn't mean that they're completely opposed, but they're definitely severing that connection. Just looking around at different politicians, you're seeing statements that you would not have seen before this. So we are seeing movement in that direction; maybe people are expecting the movement should be even bigger. But we are seeing more movement than we would have seen if it was a more ambiguous case, and that's exactly what you'd expect with these ideas.


Amber Cazzell (00:33:53):

Yeah. And so in saying that, you're saying the reaction to George Floyd's murder has been to show that morality supersedes coalitional lines?


Peter DeScioli (00:34:08):

It weakens them, is basically the idea. Like, you can still see people lining up the way they would normally line up, but they're lining up more weakly than they would have before. If they're on the side of, like, supporting police brutality, basically at this moment they're going to do that less, you know, with less vigor than they would have the day before that happened, because that side is not looking good in this situation, because the violation is so clear cut. I mean, one obvious sign is just that the officer is charged with homicide; in many previous cases that didn't happen. Another is politicians calling for bans on chokeholds and things like that, who wouldn't have done that the day before. So I think we are seeing movement, not as much as people want, and that's why people are frustrated. But yeah, I think you do see moral judgments weakening alliances.


Amber Cazzell (00:35:27):

Yeah. Yeah. Interesting. So I wanted to return to something you only wrote a short paragraph about, but I thought it was a really fascinating point. It was actually one that I wanted to bring up with Josh Greene, and it didn't wind up happening. But, you know, from a functional view of morality, my mind naturally wants to wander to a utilitarian framework and think, well, why doesn't morality seem to boil down to just pure utilitarianism, or maybe utilitarianism in its caricatured sense? And you had written a short thing about why we don't see utilitarian thinking as, like, a standard practice in morality. I'm wondering if you can speak to that on this podcast; I thought it was very enlightening.


Peter DeScioli (00:36:36):

Maybe, could you...


Amber Cazzell (00:36:39):

Okay, here's the paragraph. It's a little dense out of context, but we can unpack it as we talk. It says: nonetheless, because people can moralize a range of identifiable actions, they can in principle moralize consequentialist decisions themselves, as, for example, advocated by utilitarian philosophers. That is, it is possible to use the expected consequences of actions to make moral judgments and coordinate side-taking. There are several reasons why this approach is not more prevalent. First, consequentialist behavior might not be a category that is sufficiently identifiable to be useful for coordination, perhaps being too high-level compared with more basic categories such as lying, killing, and stealing. Second, welfare consequences might be particularly difficult to use for coordination, given that they tend to be the basis of the dispute in the first place; that is, different sides will tend to disagree on the weight to put on each disputant's welfare, potentially making welfare judgments ill-suited for coming to a consensus. In sum, nonconsequentialism in moral conscience might be explained as a defensive strategy, which in turn can be explained by the details of the coordination problem confronting bystanders who choose sides. So that's the paragraph. Heady, but maybe we can put it into casual, conversational terms and unpack it. And now that you've mentioned there are potentially more arguments, I'd love to hear more.


Peter DeScioli (00:38:00):

Okay. Yeah, it's kind of a complicated issue, as that paragraph probably sounded. Moral judgment is very complicated; there are kind of layers of strategy on strategy, and that makes it difficult to make generalizations, because every strategy has a counter-strategy, and what holds for the strategy doesn't hold for the counter-strategy. But the basic idea is: we moralize actions. That, I think, is the most important feature of our moral judgments, that they center around actions. And this is something that people could easily overlook, and most moral psychologists and theorists have overlooked: that actions are so central. I draw an analogy to language, where verbs are actually the most important words in our sentences, because actions are so important. Nouns are much simpler.


Peter DeScioli (00:39:12):

They just, you know, label an object. But if you want those objects to do something, you need a verb. So verbs are the most important words in a human language, and similarly, actions are the focus of our moral judgments. And this is not very consistent with utilitarianism, because utilitarianism, and consequentialism in general, are focused on outcomes: not did you lie or steal, but did you make people healthier, happier, safer than before? Those are more like adjectives instead of actions. So why isn't our moral judgment focused on adjectives? That would be another way of expressing the debate between deontologists and consequentialists. So anyway, I have this theory about why moral judgment is so focused on actions and why that leads to the conflict between utilitarians and deontologists. But I still have to reconcile the fact that people like Mill and Bentham, the utilitarians, didn't just say, you know, you should do this because it's nice.


Peter DeScioli (00:40:40):

They said you should do it because it's moral. So their minds still connected these two things. Even though morality is about actions, these philosophers were able to artificially get their minds to hook up these concepts, so that even though they would normally be looking for actions to condemn, they're now looking for outcomes, and then looking at whatever action was connected to it. Maybe you told the truth, but it hurt someone; they'll still condemn you for that. So they kind of wired up their own minds to do things a little bit differently, and a good theory of moral psychology has to explain why that's even possible. That's a little bit of what I'm dealing with in that paragraph: saying that actually, one of the properties of our moral judgment is that it's very flexible.


Peter DeScioli (00:41:32):

We can see a new action that no one's ever seen before and decide that it's morally wrong. And if we project that out into the world and announce it, and if other people agree with us, then it's added to the list of moral wrongs, and we'll all jump on somebody that crosses that line. So that's one of the most important properties of moral judgment, and the origin of it is that humans have new and unexpected conflicts. We might, you know, find a new resource that we had never found before; well, now we're going to have new conflicts, and we need new moral rules to deal with that. So our mind has an ability to mint new moral rules as needed. Using that ability, a utilitarian philosopher can just mint a new moral rule in their mind that says, you know, if the action leads to the better consequence, then it's permitted, and if it leads to the worse consequence, then it's not.


Peter DeScioli (00:42:32):

So that is conceptually possible for a human mind. And then the next part of the paragraph discusses why we don't all just do that. So we need to know why it's possible, why utilitarian philosophers can say this is the morally right thing to do, and then we need to know why everyone doesn't just do that. Why is it counterintuitive and not the first thing we thought of? And also, once they thought of it, why didn't it just spread the way, you know, iPhones spread? Somebody thought of that, and everyone adopted it. So after Bentham had this brilliant idea, why didn't it just spread among everyone? Why didn't we all just recognize it quickly? What's the limit there?


Peter DeScioli (00:43:19):

The main point is that we might specifically be inhibiting considering the consequences, like who's happier or less happy, because that's what our alliance psychology already does. We already are thinking about our own welfare and the welfare of our allies, and that's the calculation we're using to figure out: should I support my friend in this conflict? How will this affect them? How much will it affect them? And that means the other side is doing a very different math than us to figure this out, because they value their ally much more than we do. So the more attention we pay to the consequences, the more this is just going to lead us right back to coalitions. It's conceptually possible to do it; it's just difficult, because our alliance psychology is already focused on people's welfare outcomes. Meanwhile, actions are pretty separate from that. We can identify actions and separate them from the outcomes. That's a trick in itself, but our mind is already accustomed to doing that; that's what many of our verbs do, they point out the action without saying the consequences of the action.


Amber Cazzell (00:44:49):

Yeah, that's interesting. And I can see, as I'm speaking with you more and more, how much of your work is rooted in alliance building and coalitions. So I'm not familiar, I have not read your papers about coalition formation and friendship formation, but my understanding is that a lot of your recent work has been regarding the formation of friendships more explicitly. Is that right?


Peter DeScioli (00:45:20):

Actually, that's not really recent work. It was kind of at the same time, because I realized I had to understand alliances to understand moral judgment, and impartiality was the clue to that. Because I'm like, well, if this concept is turning off partiality, then you have to understand partiality in order to understand why impartiality could be useful. So my early work on friendship was in my dissertation, at the same time as the moral judgment work. But yeah, I have also been doing recent experiments having people choose sides, you know, in the lab. And...


Amber Cazzell (00:46:00):

Yeah, I'd love to hear about that.


Peter DeScioli (00:46:05):

Yeah. So I use an economic game, which is, you know, a common method; part of my background is in experimental economics, so I use economic games to look at people's strategic decisions in the lab with money at stake. That's what makes the decisions have actual consequences, so that we can see what happens and compare it to just what people might say in a survey that doesn't affect anyone. Anyway, you've probably seen things like the public goods game; that's probably the game most commonly used to study cooperation in groups, and there's a lot of work on that. So I designed a game to look at alliance formation and choosing sides, that basically takes the ideas from my theories and kind of makes them real, you know, in an economic game.


Peter DeScioli (00:47:12):

So people are actually playing the game that's in the theory. In the game, there are eight players, and two players at a time are chosen to have a conflict over a resource. The resource is worth a dollar fifty, and there are going to be multiple of these conflicts, so every time they win a conflict, they can just pocket a dollar fifty. Two at a time are randomly chosen, and this is supposed to represent the fact that we walk around in everyday life and then suddenly we disagree with someone. We didn't plan to disagree with them; we weren't out to, you know, grab their wallet or something. It just happened to be that I want to watch one TV show and another person in the household wants to watch a different TV show.


Peter DeScioli (00:48:00):

Neither of us chose that, but that's just what happened. Most of our conflicts are not with enemies; they're with our closest relationships. And most conflicts don't start with a malicious intention. We often accuse others of having a malicious intention, but that's just a strategy that we use in our conflicts; it's not actually what typically leads to conflicts. So that's what the game represents. And once these two show up, there's a resource, only one of them can have it, and the way to try to win this resource is to get others to choose your side. There are eight players, so that means there are six others that you could try to recruit to side with you.


Peter DeScioli (00:48:57):

And whoever gets more supporters gets the dollar fifty. In the basic game, there's no discussion, so you can't ask people to choose sides with you. Instead, they just choose sides however they're going to choose, and then they see what happens. But this game is played repeatedly among the same group of eight people, and they know who each player is; they all have a label. So, in principle, they could form alliances by siding with those who have sided with them in the past. We also recorded the outcome of every fight, including who chose which side, and showed that to them on the screen in case they wanted to use that history to make their decisions. In this environment, we just wanted to create a minimal sort of side-taking problem, where conflicts arise and others have to choose sides, and then see how they do it.


Peter DeScioli (00:50:00):

What we wanted to know was whether they would form alliances, whether they would side with those who had sided with them previously, or whether they might just not really have a preference and not care very much and just play randomly. Another possibility is that they would go more with popularity: look at who's winning conflicts and maybe side with them. So we looked at those, and basically what we found is that people just formed alliances very quickly, you know, right away. And the alliances that they formed were also pretty stable. So pretty quickly, you know, you're player B and you decide you're going to support J, and then J supports you, and then you two stick to supporting each other through the course of the experiment. People did this very quickly, even in this kind of bizarre economic game where everyone's just a letter on the screen. So I think that gives us a little bit of controlled laboratory evidence for just how prone people are to form these alliances. And so, yeah, that's the basic...
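Note: to make the setup concrete, here is a minimal Python sketch of the repeated side-taking game as he describes it: eight lettered players, a random pair in conflict each round over $1.50, bystanders choosing sides, and the larger side winning. The side-choosing heuristic (back whoever has sided with you most in the past) is an illustrative assumption, not the experiment's actual software or the participants' elicited strategy.

```python
import random
from collections import defaultdict

PLAYERS = list("ABCDEFGH")  # eight players, each just a letter on screen
PRIZE = 1.50                # value of the contested resource
ROUNDS = 50

# support[x][y]: how many times player y has sided with player x so far
support = defaultdict(lambda: defaultdict(int))
earnings = defaultdict(float)

def choose_side(bystander, a, b):
    """Illustrative reciprocity heuristic: back whichever disputant has
    supported you more in past conflicts; break ties randomly."""
    sa, sb = support[bystander][a], support[bystander][b]
    if sa == sb:
        return random.choice([a, b])
    return a if sa > sb else b

for _ in range(ROUNDS):
    a, b = random.sample(PLAYERS, 2)   # a conflict breaks out
    votes = {p: choose_side(p, a, b) for p in PLAYERS if p not in (a, b)}
    side_a = sum(v == a for v in votes.values())
    side_b = len(votes) - side_a
    winner = random.choice([a, b]) if side_a == side_b else (a if side_a > side_b else b)
    earnings[winner] += PRIZE          # winner pockets the resource
    for p, v in votes.items():         # record who sided with whom
        support[v][p] += 1

for p in PLAYERS:
    allies = sorted(support[p], key=support[p].get, reverse=True)[:2]
    print(p, f"${earnings[p]:.2f}", "most frequent supporters:", allies)
```

Run repeatedly, this tends to lock in stable pairs of mutual supporters within a few rounds, echoing the quick, durable alliance formation the experiment found, though of course the heuristic here is stipulated rather than discovered.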


Amber Cazzell (00:51:28):

Yeah. And have you started to add any wrenches to the works, any extra things that might influence alliance formation?


Peter DeScioli (00:51:40):

Not yet, but that is on my wishlist. From the start, I wanted to add moral content to the conflicts, but I haven't done that yet, and I knew it would be difficult. The game is already fairly complicated, you know, with eight players. The way they have to make their choices is they rank everybody else: they say their best friend, their second best, their third best. And we need to know that because they don't know who the conflicts are going to break out between. So their first rank tells us who they would support against everybody; the second rank says they would support that person against everybody except their first rank. So participants, each round, are able to rank everybody else, and this is a loyalty ranking.


Peter DeScioli (00:52:30):

So they're ranking their loyalties to everyone, and then they can update those loyalties every round after seeing what happened in the previous round: who sided with whom. So the game is already kind of pushing on the complexity side, and that's why I started there. But on my wishlist for the future is to try to add some moral content. The way this would work is: right now, there are just two individuals who both want the same resource, and there's no basis to decide that one's morally right or one's morally wrong. But if we just created a fictional story for why they're in this conflict, and said, you know, one person tried to steal from the other, well, that would add moral content. And that could even be mixed in with more neutral ones, where they both arrive at the same apple tree at the same time.


Peter DeScioli (00:53:26):

But there's only one apple on it. So then, if somebody has formed an alliance, you know, you're player B and you've been siding with player J, and then all of a sudden player J was wrong, they're the one that stole: are you going to still stick with your alliance, or are you going to break from it, just given this sort of cover story about who was in the wrong? So yeah, the idea is to start to add some moral content.
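Note: a small Python sketch of how the loyalty-ranking mechanism described above could resolve side-taking, plus the kind of moral-content override he is proposing to add. The player names and the override rule are hypothetical, for illustration only.

```python
# Each round, every player ranks the seven others from most to least
# loyal. When a conflict breaks out, a bystander sides with whichever
# disputant appears earlier (is ranked higher) in their own ranking.

def side_by_ranking(ranking, a, b):
    """Return the disputant this bystander supports under pure loyalty."""
    return a if ranking.index(a) < ranking.index(b) else b

def side_with_moral_rule(ranking, a, b, thief=None):
    """Hypothetical moral-content variant: if the cover story says one
    disputant stole, side against the thief regardless of loyalties."""
    if thief == a:
        return b
    if thief == b:
        return a
    return side_by_ranking(ranking, a, b)

# Player B's loyalty ranking of the seven other players (illustrative).
b_ranking = ["J", "C", "D", "E", "F", "G", "H"]

print(side_by_ranking(b_ranking, "J", "D"))                  # J: back the ally
print(side_with_moral_rule(b_ranking, "J", "D", thief="J"))  # D: the rule overrides
```

The interesting measurement in the planned extension is exactly the gap between those two functions: how often bystanders abandon the pure loyalty rule for something like the moral override once the cover story marks an ally as the one in the wrong.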


Amber Cazzell (00:54:01):

Yeah, really cool. I'll be interested to read that when it comes out. So what is your thinking moving forward, as you're planning on continuing your research trajectory? What are you finding yourself most interested in with regards to applying your theories to different areas of research that are catching your eye?


Peter DeScioli (00:54:33):

You know, I'm in a political science department, you might have noticed, even though my background is psychology. So one thing that I think has very exciting potential is applying moral psychology to all the different dilemmas of politics. I teach a course on moral politics where we do that: you know, things about taxes, safety and viruses, voting, housing, rent control, medicine, food, poverty, jobs, abortion, contraception. All of these different political issues, I think, are ripe for moral psychology. Moral psychology has gotten into it to some extent, but usually the goal is to understand the judgments themselves, rather than trying to understand judgments about a specific type of policy that a voter might actually encounter, you know, on a ballot initiative or in a political campaign when they're deciding whether to vote for somebody.


Peter DeScioli (00:55:49):

So anyway, I think there's a lot there. And then, you know, there's tons of work on the trolley problem, and I like the trolley problem, but it's very easy to construct those kinds of dilemmas for any topic. So there's no reason why we have to only focus on that one, or even just focus on killing. One paper that we wrapped up recently looks at debt dilemmas. The dilemma here is between countries, because my coauthor is very interested in countries' debts to each other. When a financial crisis happens, does the country still, you know, need to repay? How do they trade off their citizens' welfare against the promise to repay? And it's the same kind of dilemma: you've got the Kantian side that would say you always have to repay a debt, the consequences don't matter.


Peter DeScioli (00:56:50):

And then you've got the consequentialist side that says, no, hold on a minute, paying a debt is just kind of an arbitrary action; let's add up the consequences to figure out which choice is actually going to lead to more, you know, health and happiness. So anyway, we find the same kind of division that we find for trolley problems: some people on the consequentialist side, some people on the Kantian side. We also find that they are using that kind of reasoning. They're not just viewing it all consequentialist, which you might have thought, because it's an economic issue; you might think, well, economic issues are going to be more about costs and benefits. But we find them using the same sort of deontic language, must and have to and need to, that we find in trolley problems.


Peter DeScioli (00:57:50):

And also saying things like, no excuses, and, no matter what the consequences. Participants will just say these things directly, just like Kant. So anyway, applying these ideas to this particular area of international debts and economic crises is an example of a plan for applying this to political issues. I think there's a ton of work to do on that, and it could also help to bridge divides between social psychology and political science. So that's one area. I mean, there are many others.


Amber Cazzell (00:58:45):

That's so cool. That's really neat; it's fun to hear about. And it's also fun for me to hear a little bit about the political science side, because I don't know that I've had political scientists on this podcast before; it's always been, like, a joint appointment, and the conversation hasn't specifically centered around political science research, which has been on my radar for a while. But anyway, Peter, thank you so much. I had a lot of fun listening to you and getting a deeper understanding of your theory. I find it really interesting; the framing intrigues me and has, you know, pushed me to reconsider a lot of research in a new light. So thank you so much. Thank you so much again.


Peter DeScioli (00:59:41):

Sure. Yeah. Thanks for having me, great conversation. I always appreciate the chance to discuss these things.


