It’s called the Soviet-HARVARD illusion for a reason and that reason is Joshua Greene.

Nassim Taleb defines the Soviet-Harvard illusion as follows:

Thinking that the reasons for things are, by default, accessible to you. Also called Naive Rationalism.i

When I first read this definition, I didn’t see the connection between Harvard and such top-down, idealistic socialism. It’s since become far more apparent to me, but I’ve yet to make an example of one of its proponents. I’m going to remedy this using this week’s EconTalk episode, wherein interviewer Russ Robertsii chats with Joshua Greene, Professor of Psychology and Director of the Moral Cognition Lab at Harvard University.

So for your enlightenment and entertainment, let’s look at a prime example of the brokenness of Harvard academia. This is gonna be a long one:

Greene: What is morality, to begin with? And what I think, and a lot of other recent commentators and some people in some sense going all the way back to Charles Darwin think morality is fundamentally about is our social nature. And more specifically about cooperation: that is, what we call morality is really a suite of psychological tendencies and capacities that allow us to live successfully in groups, that allow us to reap the advantages of cooperation. But these tendencies that make up morality come primarily in the form of emotional responses that drive social behavior and that respond to other people’s social behavior.

This is, of course, side-splittingly hilarious and a perfect example of the aforementioned illusion. Not only does Greene misrepresent social interactions, group dynamics, and the fundament of cooperation as something other than survival- and/or power-driven considerations, but he also waves his magic wand and makes morality something other than, as MP so neatly put it the other day… “What should be done.”iii

But I won’t dwell on this point too much because we’ve a ways to go yet.

Greene: So now we have in this common space all of these different tribes that are cooperative in different ways, cooperative on different terms, with different leaders, with different ideals, with different histories, all trying to exist in the same space. And this is the modern tragedy. This is the modern moral problem. That is, it’s not a problem of turning a bunch of ‘me-s’ into an ‘us.’ That’s the basic problem of the tragedy of the commons. It’s about having a bunch of different us-es all existing in the same place, all moral in their own way, but with different conceptions of what it means to be moral.

Greene should drop by La Serenissima sometime to see what a cosmopolitan, diverse, and functioning(!) world looks like. Hint: the common currencies are logic and capital, as they could only be.

The “modern moral problem” is not and cannot be resolved by catering to the whimsy of the lowest common denominator. Problems can only be solved by the best among us, the shepherds. Sure, we hope they’re benevolent, but that’s quite aside from any discussion related to their wisdom and efficacy.

Greene: And so, if our basic psychology does a pretty good job of solving the me-versus-us problem of having basic cooperation within a group, the modern problem, both I think philosophically and psychologically is: What kind of a system and what kind of thinking do we need to regulate life on those new pastures of the modern world, where we have many different tribes with many different terms of cooperation, many different moral systems?

This type of thinking, that thinking itself is something to be resolved on a large scale, is pure Stalinism. It’s the same brain-damaged notion that there’s this “new world” that needs “new solutions,” when in fact there’s only the same old, very wise, very established world there’s always been, and in which the diversity of solutions is a bounded and well-explored set.

Wisdom lies in reading history to find these solutions and in viewing the future through the subtraction of what is fragile rather than the addition of neomaniacal madness. We don’t need to “regulate life on those new pastures of the modern world,” we need to get out of our own way.

Greene: We have two kinds of problems; we also have two kinds of thinking. And that our gut reactions, our intuitions, what I call our automatic settings, which I’ll explain in a moment, do a good job of solving the original tragedy of the commons, but they create the problem of common-sense morality. That our gut reactions about how we ought to live make it harder for us to live in many ways in a pluralistic world.

If the way in which people naturally tend to act and react is presenting issues in a given scenario, what’s the path of least resistance? What’s the +EV strategy? Changing people or changing the environment? Humans are nothing if not masters at changing their physical environment to suit their tastes and needs, so it seems an awful lot more sensible that the environment should change first and people second.

Besides, the notion that a pluralistic society is in some fashion morally superior to a culturally uniform society is absurdity of the highest order. It’s a fashion, nothing more. Like bell-bottom jeans or Ugg boots. Even if there’s such a thing as kindness and charity, there’s no such thing as moral superiority; there’s only what works and what doesn’t. We must do more of what works and less of what doesn’t; it’s as simple as that. That’s what survival and success mean, not this “let’s solve the woes of humanity by sending everyone to Harvard” stuff. That’s little more than a vain attempt to keep kids out of the army and off the street.

So if you can make a pluralistic society work, great, more power to you. If not, try another angle. If that one doesn’t work, try another still.

Greene: With this idea in mind of the tension between our automatic settings and our manual mode, our gut reaction and our slow, deliberate thinking, I’ll introduce, as you said, the Trolley Dilemma. This is the philosophical problem that got me interested, well, really got me started in my research as a scientist. So, one version of the Trolley case goes like this. You’ve got a trolley headed towards 5 people, and you can save them but they are going to die if you don’t do anything. If you hit a switch you can turn the trolley away from the five and onto another track, but unfortunately there’s still 1 person there. And if you ask most people, ‘Is it okay to turn the trolley away from the 5 and have it run over the 1 person?’ depending on who you ask and how you ask it, about 90% of people will say, ‘Yes.’

Only 90%, eh? How about this more active version of the experiment…

Greene: Is it okay to push the guy off the footbridge, use him as a trolley stopper to save the 5 people? Most people say no. There are some populations where people are more likely to say yes. But in general, take an American sample, somewhere between about 10% and 35% of people will say that it’s okay to push the guy off the footbridge; most people will say that it’s not okay. So, interesting question: What’s going on? Why do we say that it’s okay to trade 1 life for 5 when you can hit a switch that will divert the trolley away from 5 and onto 1, but it’s not okay to push the guy off the footbridge–even if we assume that this is going to work and if we assume that there’s no other way to achieve this worthy goal. Most people still say that it’s wrong.

Hm. Less than 35% of participants would push a guy off a footbridge to save 5 others? Appreciating that the participants are probably idealistic college students, this is still an alarmingly low number of people who’re capable of calculating expected value. Assuming that people can accurately predict what they’ll do in such a scenario, a stretch to be sure, it’s interesting to note how many participants are paralysed by their collective notions of “meanness.”
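For the curious, the expected-value arithmetic being fumbled here is about as simple as arithmetic gets. A minimal sketch follows; the success probabilities are my own illustrative assumptions, not figures from Greene’s studies:

```python
# Net expected lives saved by acting versus standing there. The one person
# on the footbridge (or side track) dies whenever you act; the five are
# saved only if the act works. Probabilities are illustrative assumptions,
# not data from the experiments Greene describes.

def net_lives_saved(p_success: float, group_size: int = 5, cost: int = 1) -> float:
    return p_success * group_size - cost

print(net_lives_saved(1.0))   # 4.0: the stipulated "assume it works" case
print(net_lives_saved(0.3))   # 0.5: still positive even at 30% odds
print(net_lives_saved(0.2))   # 0.0: the break-even point
```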

As if large nation states weren’t fucked badly enough, this distinct lack of cannon fodder basically seals their fate. No war, no democratic state. Simple as that.

Greene: We’re coming up on a decade and a half of research on or stemming from this moral dilemma. And we’ve learned a lot. It seems that it’s primarily an emotional response to that physical action of pushing the guy off the footbridge. And you can see, for example, in a part of the brain called the amygdala, which you might think of as a mammal’s early-warning alarm system that something may be bad, needs attention, maybe not a good idea–you see that alarm bell going off in this basic part of the mammalian emotional brain. And the strength of that signal is correlated with the extent to which people say that it’s wrong to push the guy off the footbridge or whatever it is. You also see increased activity in the dorsolateral prefrontal cortex, which is the part of the brain that’s most closely associated with explicit reasoning, or anything that really requires a kind of mental effort, like remembering a phone number or resisting an impulse of some kind or explicitly applying a behavioral rule. That’s sort of the seat of manual mode.

Because blood flows to the amygdala and dorsolateral prefrontal cortex, something must be happening there! Such potato scientists, these Harvardites.

Honestly, looking at brain scans might be “better than nothing,” but this is like a professor of literature studying the ink on the page. The map is not the territory, and just because you have a technology that makes pictures doesn’t mean you know what those pictures mean. This is really the essence of Taleb’s term, which Greene exemplifies almost comically.

Greene: Going back to the tragedy of common-sense morality is you’ve got all these different tribes with all of these different values based on their different ways of life. What can they do to get along? And I think that the best answer that we have is–well, let’s back up. In order to resolve any kind of tradeoff, you have to have some kind of common metric. You have to have some kind of common currency. And I think that what utilitarianism, whether it’s the moral truth or not, does is provide a kind of common currency. So, what is utilitarianism? It’s basically the idea that–it’s really two ideas put together. One is the idea of impartiality. That is, at least as social decision makers, we should regard everybody’s interests as of equal worth. Everybody counts the same. And then you might say, ‘Well, but okay, what does it mean to count everybody the same? What is it that really matters for you and for me and for everybody else?’ And there the utilitarian’s answer is what is sometimes called, somewhat accurately and somewhat misleadingly, happiness. But it’s not really happiness in the sense of cherries on sundaes, things that make you smile. It’s really the quality of conscious experience.

“The quality of conscious experience…” Jesus. Sounds a hell of a lot like “feelings,” that personal and necessarily apolitical matter, to me.

Seriously, feelings don’t matter. They can be drugged into being, they can be twisted and turned towards any end, and they can be all too fleeting. Feelings, and their companion emotions, are therefore to be relied on about as much as you rely on a shipping window from the Postal Corporation of Jamaica. You sorta want to believe them but, knowing that everything runs on “island time,” if it’s really important, stick with UPS, that is, logic.

Greene: We all have our pleasures and pains, and as a moral philosophy we should all count equally. And so a good standard for resolving public disagreements is to say we should go with whatever option is going to produce the best overall experience for the people who are affected. Which you can think of as shorthand for maximizing happiness–although I think that that’s somewhat misleading. And the solution has a lot of merit to it. But it also has endured a couple of centuries of legitimate criticism. And one of the biggest criticisms–and now we’re getting back to the Trolley cases–is that utilitarianism doesn’t adequately account for people’s rights.

HAHAHAHA!!! Ok, I need to stop and breathe for a second…

If our “moral philosophy” is pure socialism, that is, everyone is equal just because we say so, then we’re in deep, deep trouble… though obviously no deeper than under every other socialist implementation.

Again, producing the best overall “experience” is so far from a moral imperative, much less a moral good, that it’s ridiculous. If society is going to legitimately concern itself with personal experiences, where could it possibly end? I’ll tell you where: we’ll end up taking surveys of microaerophilic soil bacteria in garbage dumps and charting them over time to make sure that they have “the perfect conditions for a happy life” or some such nonsense. If all people matter, then why not all fetuses and corpses? And if all fetuses and corpses matter, why not plants and animals? From there, patent, ear-biting insanity is just a quick hop, skip, and a jump.

Positive experiences are earned, not doled out like candy on Halloween. It’s negative experiences that are typically doled out, top-down, as Greene is suggesting, directly contradicting any hope his naive idealism has of ever functioning in the real world.

Not that Greene wants to function in the real world. He, like Obama, misunderstands how the world actually works and completely expects that it will conform to his magical ideas. Because reasons.

Roberts: So, here we are in the United States. We’re in this pasture. We’re all here together. We have very different philosophies. Unfortunately, we don’t really have–not only do we disagree, even if we agreed, you and I, on what the right, say, way to adjudicate our dispute, we don’t really have a mechanism for implementing it. We think we do. We call it democracy. But it’s a very imperfect mechanism that often exploits our differences for the benefit and gain of individuals. So it’s not obvious to me that it’s even a good idea to say, Let’s pretend we could decide what is the greatest happiness across these 330 million people, let alone the 7 billion, and then hope that somehow it’ll get implemented. Is that really a practical solution to our political problems?
Greene: No, I don’t think that there is any alternative. I think that we are living someone’s attempts to adjudicate these tradeoffs of values, and we can either just accept what the powers that be put in front of us, or we can vote our conscience and try to change them or vote our conscience and say, yes I endorse this. I think that what you’re objecting to is the difficulty of the problem, not an inherent problem with the solution, if you want to call it that, that I’m proposing.

Of course there’s no alternative to democracy!iv How could there be, when your head is buried eleventy feet in the ground and all you can smell is the faint whiff of the endless and entirely meaningless letters behind your name?

So ya, “vote your conscience,” because that’s how change happens. That’s what Genghis did, that’s what Lenin did, that’s what you should do. It’s the only possible way!

And, to close, why wouldn’t a Soviet-Harvardite take any opportunity to turn a moral discussion into one about taxes…

Greene: So I think it’s easier to think about these things with a concrete example. So, take the case of raising taxes on the wealthiest Americans. Now, let’s suppose that I know that this is controversial. But let’s suppose that government spending can provide good stimulus to the economy and can increase employment and make things better off for the people who are employed as a result. Okay, so you have to do a tradeoff. You would have to say, How much do the wealthiest people lose by having their incomes reduced by some amount from someone who is making half a million dollars a year, and they might pay, instead of paying 30% in taxes they’d pay 40% or something like that, versus the benefits that go to people who now have jobs as a result of expansion of the public sector, or children who have a better shot at living the good life because of increased commitment to early childhood education, etc. There are a lot of empirical assumptions here or questions here. But if we can at least agree on the empirics, then there’s the question of, Okay, is this tradeoff worth it? I don’t think there’s any way to avoid asking that question, and I think that in a lot of these cases, it’s actually pretty clear–that, for example, taking people who are already very wealthy and reducing their income somewhat doesn’t really do much to their happiness. Whereas if you provide opportunities to people at the bottom of the scale, that actually can make an enormous difference in their lives. So, you know, I think that the alternative is to just say, let it just evolve the way it evolves without consciously thinking about this as a social problem. But I don’t think that that’s a better alternative.

Props to Russ for countering the above point by mentioning the 10-20% boundaries for charity as described in Jewish Law. Really makes the Tithe³ seem modest, neh?
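And for the record, the back-of-the-envelope behind Greene’s “doesn’t really do much to their happiness” line is plain old diminishing marginal utility. A minimal sketch, assuming log utility of income; the utility function, the $20k earner, and the size of the transfer are my own illustrative assumptions, not Greene’s figures:

```python
import math

# Utilitarian tradeoff sketch under an assumed log-utility model of income.
# The $500k income and the 30% -> 40% tax bump come from Greene's example;
# everything else (log utility, the $20k earner, a dollar-for-dollar
# transfer) is an illustrative assumption.

def utility(income: float) -> float:
    return math.log(income)

rich_income = 500_000
poor_income = 20_000
transfer = 50_000  # the extra 10 points of tax on $500k

rich_loss = utility(rich_income) - utility(rich_income - transfer)
poor_gain = utility(poor_income + transfer) - utility(poor_income)

print(f"utility lost by the rich earner:   {rich_loss:.3f}")   # ~0.105
print(f"utility gained by the poor earner: {poor_gain:.3f}")   # ~1.253
```

Swap in a different utility function, or drop the license to sum happiness across persons, and the “pretty clear” tradeoff stops being clear at all, which is rather the point.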

So anyways, that’s the Soviet-Harvard illusion.

It’s a thing.

___ ___ ___

  1. Naive Rationalism is closely related to Naive Interventionism, which Taleb defines as:

    Intervention with disregard to iatrogenics. The preference, even obligation, to “do something” over doing nothing. While this instinct can be beneficial in emergency rooms or ancestral environments, it hurts in others in which there is an “expert problem”.

    This, I define as the opposite of decency.

  2. I’ve used Russ’ weekly conversations to launch tirades before, and I doubt this will be the last time either. For earlier examples, see Y Combinator: The American Idol of Venture Capital and Jeffrey Sachs: The RSM of African Interventionism.
  3. mircea_popescu: I don’t per se care what yu do, it’s a moral discussion. “What should be done.”
    adlai: Right. If I get a well-capitalized account profiting from this, what’s morally wrong with that? You could point out that I’ve fallen short of pure greed by not accepting investment, paying dividends, and retaining some fee as a personal profit.
    mircea_popescu: Nothing at all. (Note that i use “moral” properly, not in the bastardized form common among the plethora of would-be stalins without the ballsack.) […] (and generally, moral thought proceeds quite strictly : 1) is this a good thing or a bad thing ; 2) how much of it ? there’s no overlapping these stages)
  4. Because who ever heard of de Tocqueville?