I’m currently reading Bruce Schneier’s book Liars and Outliers – a fascinating analysis of security, trust, and game theory.
The section I’m on at the moment (I’m only a quarter of the way through) concerns his descriptions of the different kinds of societal pressures which aim to reduce incidents of defection. Defections occur when an individual is faced with a dilemma, and chooses to act in their own immediate interest at the expense of the interest of the wider group or society. That’s a clumsy and simplistic description of what he says, but as this isn’t a book review I’m not going to spend time trying to do it justice properly. Suffice to say: it really is very interesting – go read it!
Right now I’m faced with a dilemma. Terry is sick. (Not seriously – just an unpleasant cold.) Looking after a sick husband is kind of a drag. He’s all gross and snotty, his IQ has dropped about 15 points, he’s whiny, and he’s not doing anything except sitting on the sofa watching Leslie Nielsen movies and stinking the flat up with Olbas Oil. The rational self-interested agent in me would quite like to get out and go and do something fun, leaving him to it.
However, there’s a good chance that I’m going to come down with this too. I might get lucky, and not get it at all, or have something milder for a shorter period of time. Or I might be unlucky and have something as bad or worse. When I get sick I want to be looked after. I want someone making me soup, fetching medicine, replenishing tissues and watching films with me which make me feel better (either Fargo or Monsters Inc, depending on the severity of the case). So the expected reciprocity is a substantial factor in resolving my dilemma. If I look after Terry now, he is more likely to look after me later.
Schneier covers moral, reputational and institutional pressures, as well as security systems, in detail as the types of societal pressure which discourage us from acting against the group interest. As I’m only part way through I haven’t finished the sections on institutional pressures or security systems yet. From what I’ve read so far they don’t seem that relevant, so I’ll ignore those.
Reputational pressure does seem relevant. One example given in the book concerns the dilemma of a spouse with a wandering eye. The potential damage to reputation within one’s circle of friends if one is caught philandering can be a powerful motivator. I imagine that if I were to tell Terry I was heading off to a hotel for a few days until his mucus cleared up, he would vent his sense of rejection and betrayal to our mutual friends. Assuming they took his side (which in this case is likely), being on the receiving end of whatever group censure they came up with would, I imagine, be very unpleasant. There might be snide comments, or I might be ignored or shunned completely. Even if nothing overt actually happened, being the kind of person I am, I’d be intensely paranoid that they thought worse of me. I don’t want my friends to think I’m a horrible person, and if I abandon Terry in his time of need they will probably find out, so that provides another strong incentive to look after him.
As part of his discussion of moral pressure, Schneier has this to say:
At the risk of opening a large philosophical can of worms, I’ll venture to say that morals are unique in being the only societal pressure that makes people “want to” behave in the group interest. The other […] mechanisms make them “have to.”
Since philosophical cans of worms are totally my bag, this got me thinking a lot. I disagree with Schneier here, as in my experience the anticipated guilt from ignoring moral pressure can amount to the same thing as the anticipated rejection from ignoring reputational pressure. Earlier I mentioned that the mere thought that others are thinking ill of me, even if they do nothing to demonstrate that ill thought, can be enough to make me behave in a certain way. Similarly, any action taken because I know I will feel horribly guilty if I don’t is me not wanting the disapproval of myself. I don’t know if Schneier’s model is set up to think of a person’s conscience as an external party exerting a reputational pressure.
So I’m also worried about feeling guilty if I don’t look after Terry, which Schneier might think of as a moral pressure, but which I think of as an extension of reputational pressure.
So there are lots of compelling rational reasons why I should decide to look after Terry, even though it can be icky and a bit boring. Is that why I decided that’s what I would do?
Well, no. I don’t feel like I made a decision at all.
Supplying comfort and a small measure of palliative care makes my sick husband feel a bit better, and I am so invested in his well-being that him feeling a bit better is enough to make me want to help him in any way I can. Perhaps Schneier would account for this by saying that the moral pressure exerted on me from the outset was so great that it doesn’t seem like a dilemma at all. For the avoidance of doubt, it genuinely didn’t seem like a dilemma. My first instinct was to get Terry set up on the sofa with drugs, juice, tissues and blankets, and it was only later that I started considering the parallels with societal dilemmas as an academic exercise.
Most of the above can therefore be summed up by saying that I have chosen to look after my sick husband because it feels like the right thing to do, and that decision was made so quickly and perhaps subconsciously that it didn’t feel like a choice at all. With rational afterthought I can see how considerations of reciprocity, social standing and the avoidance of guilt might have contributed to my eventual decision, but I didn’t actually sit and think about all of this; I just got on with it.
At the start of the book Schneier discusses security in various ecosystems, where organisms have evolved to behave in mutually beneficial ways, without decision making or rational thought ever having played a part.
I love my husband very much. I feel it in my proverbial extremities, as The Troggs put it. And since I put infinitely more stock in the biochemical explanations for this than in silly concepts such as souls and destiny, I am very curious as to whether I made a complex rational decision so quickly that I didn’t even notice I was making it, or whether I’m just following the software that helped my species survive and evolve in the first place. After all, looking after my temporarily indisposed hunter-gatherer would seem to be advantageous, particularly as I’m presumably also still running the software telling me that he will be a good genetic match.
However since we’ve agreed that our respective genes terminate here, I wonder if that will stop someday.