It’s called the Soviet-HARVARD illusion for a reason and that reason is Joshua Greene.

Nassim Taleb defines the Soviet-Harvard illusion as follows:

Thinking that the reasons for things are, by default, accessible to you. Also called Naive Rationalism.i

When I first read this definition, I didn’t see the connection between Harvard and such top-down idealistic socialism. It’s since become far more apparent to me, but I’ve yet to make an example of one of its proponents. I’m going to remedy this using this week’s EconTalk episode, wherein interviewer Russ Robertsii chats with Joshua Greene, Professor of Psychology and Director of the Moral Cognition Lab at Harvard University.

So for your enlightenment and entertainment, let’s look at a prime example of the brokenness of Harvard academia. This is gonna be a long one:

Greene: What is morality, to begin with? And what I think, and a lot of other recent commentators and some people in some sense going all the way back to Charles Darwin think morality is fundamentally about is our social nature. And more specifically about cooperation: that is, what we call morality is really a suite of psychological tendencies and capacities that allow us to live successfully in groups, that allow us to reap the advantages of cooperation. But these tendencies that make up morality come primarily in the form of emotional responses that drive social behavior and that respond to other people’s social behavior.

This is, of course, side-splittingly hilarious and a perfect example of the aforementioned illusion. Not only does Greene misrepresent social interactions, group dynamics, and the fundament of cooperation as something other than survival- and/or power-driven considerations, he decides to wave his magic wand and make morality something other than, as MP so neatly put it the other day… “What should be done.”iii

But I won’t dwell on this point too much because we’ve a ways to go yet.

Greene: So now we have in this common space all of these different tribes that are cooperative in different ways, cooperative on different terms, with different leaders, with different ideals, with different histories, all trying to exist in the same space. And this is the modern tragedy. This is the modern moral problem. That is, it’s not a problem of turning a bunch of ‘me-s’ into an ‘us.’ That’s the basic problem of the tragedy of the commons. It’s about having a bunch of different us-es all existing in the same place, all moral in their own way, but with different conceptions of what it means to be moral.

Greene should drop by La Serenissima sometime to see what a cosmopolitan, diverse, and functioning(!) world looks like. Hint: the common currencies are logic and capital, as they could only be.

The “modern moral problem” is not and cannot be resolved by catering to the whimsy of the lowest common denominator. Problems can only be solved by the best among us, the shepherds. Sure, we hope they’re benevolent, but that’s quite aside from any discussion related to their wisdom and efficacy.

Greene: And so, if our basic psychology does a pretty good job of solving the me-versus-us problem of having basic cooperation within a group, the modern problem, both I think philosophically and psychologically is: What kind of a system and what kind of thinking do we need to regulate life on those new pastures of the modern world, where we have many different tribes with many different terms of cooperation, many different moral systems?

This type of thinking, that thinking itself is something to be resolved on a large scale, is pure Stalinism. It’s the same brain-damaged notion that there’s this “new world” that needs “new solutions,” when in fact there’s only the same old, very wise, very established world there’s always been, and in which the diversity of solutions is a bounded and well-explored set.

Reading history to find these solutions and viewing the future with the subtraction of what is fragile rather than addition of neomaniacal madness is where wisdom lies. We don’t need to “regulate life on those new pastures of the modern world,” we need to get out of our own way.

Greene: We have two kinds of problems; we also have two kinds of thinking. Our gut reactions, our intuitions, what I call our automatic settings, which I’ll explain in a moment, do a good job of solving the original tragedy of the commons, but they create the problem of the tragedy of common-sense morality. That is, our gut reactions about how we ought to live make it harder for us to live in many ways in a pluralistic world.

If the way in which people naturally tend to act and react is presenting issues in a given scenario, what’s the path of least resistance? What’s the +EV strategy? Changing people or changing the environment? Humans are nothing if not masters at changing their physical environment to suit their tastes and needs, so it seems an awful lot more sensible that the environment should change first and people second.

Besides, the notion that a pluralistic society is in some fashion morally superior to a culturally uniform society is absurdity of the highest order. It’s a fashion, nothing more. Like bell-bottom jeans or Ugg boots. Even if there’s such a thing as kindness and charity, there’s no such thing as moral superiority, there’s only what works and what doesn’t. We must do more of what works and less of what doesn’t, it’s as simple as that. That’s what survival and success mean, not this “let’s solve the woes of humanity by sending everyone to Harvard” stuff. That’s little more than a vain attempt to keep kids out of the army and off the street.

So if you can make a pluralistic society work, great, more power to you. If not, try another angle. If that one doesn’t work, try another still.

Greene: With this idea in mind of the tension between our automatic settings and our manual mode, our gut reaction and our slow, deliberate thinking, I’ll introduce, as you said, the Trolley Dilemma. This is the philosophical problem that got me interested, well, really got me started in my research as a scientist. So, one version of the Trolley case goes like this. You’ve got a trolley headed towards 5 people, and you can save them, but they are going to die if you don’t do anything. If you hit a switch you can turn the trolley away from the five and onto another track, but unfortunately there’s still 1 person there. And if you ask most people, ‘Is it okay to turn the trolley away from the 5 and have it run over the 1 person?’ depending on who you ask and how you ask it, about 90% of people will say, ‘Yes.’

Only 90%, eh? How about this more active version of the experiment…

Greene: Is it okay to push the guy off the footbridge, use him as a trolley stopper to save the 5 people? Most people say no. There are some populations where people are more likely to say yes. But in general, take an American sample, somewhere between about 10% and 35% of people will say that it’s okay to push the guy off the footbridge; most people will say that it’s not okay. So, interesting question: What’s going on? Why do we say that it’s okay to trade 1 life for 5 when you can hit a switch that will divert the trolley away from 5 and onto 1, but it’s not okay to push the guy off the footbridge–even if we assume that this is going to work and if we assume that there’s no other way to achieve this worthy goal. Most people still say that it’s wrong.

Hm… Less than 35% of participants would push a guy off a footbridge to save 5 others? Appreciating that the participants are probably idealistic college students, this is still an alarmingly low number of people who’re capable of calculating expected value. Assuming that people can accurately predict what they’ll do in such a scenario, a stretch to be sure, it’s interesting to note how many participants are paralysed by their collective notions of “meanness.”
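For the record, the expected-value arithmetic being flubbed here is not exactly advanced. A minimal sketch in Python, with the trolley-stopping probability as an explicit (and entirely hypothetical) assumption rather than anything from the study:

```python
# Hypothetical expected-value comparison for the footbridge case.
# The probabilities below are illustrative assumptions, not data.

def expected_lives_saved(p_success: float, on_track: int = 5, pushed: int = 1) -> float:
    """Net expected lives saved by pushing, versus doing nothing.

    p_success: assumed probability the pushed body actually stops the trolley.
    Doing nothing saves none of the five; pushing saves all five with
    probability p_success, at the certain cost of the one pushed.
    """
    return p_success * on_track - pushed

# At the certainty the dilemma stipulates, pushing nets four lives:
assert expected_lives_saved(1.0) == 4.0
# The gamble only breaks even once the assumed odds drop to one in five:
assert expected_lives_saved(0.2) == 0.0
assert expected_lives_saved(0.1) < 0
```

In other words, you’d have to doubt the stated premise of the problem rather severely before refusing to push becomes the +EV play.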

As if large nation states weren’t fucked badly enough, this distinct lack of cannon fodder basically seals their fate. No war, no democratic state. Simple as that.

Greene: We’re coming up on a decade and a half of research on or stemming from this moral dilemma. And we’ve learned a lot. It seems that it’s primarily an emotional response to that physical action of pushing the guy off the footbridge. And you can see, for example, in a part of the brain called the amygdala, which you might think of as a mammal’s early-warning alarm system that something may be bad, needs attention, maybe not a good idea–you see that alarm bell going off in this basic part of the mammalian emotional brain. And the strength of that signal is correlated with the extent to which people say that it’s wrong to push the guy off the footbridge or whatever it is. You also see increased activity in the dorsolateral prefrontal cortex, which is the part of the brain that’s most closely associated with explicit reasoning, or anything that really requires a kind of mental effort, like remembering a phone number or resisting an impulse of some kind or explicitly applying a behavioral rule. That’s sort of the seat of manual mode.

Because blood flows to the amygdala and dorsolateral prefrontal cortex, something must be happening there! Such potato scientists, these Harvardites.

Honestly, looking at brain scans might be “better than nothing” but this is like a professor of literature studying the ink on the page. The map is not the territory and just because you have a technology that makes pictures, it doesn’t mean you know what those pictures mean. This is really the essence of Taleb’s term, which Greene plays up almost comically.

Greene: Going back to the tragedy of common-sense morality is you’ve got all these different tribes with all of these different values based on their different ways of life. What can they do to get along? And I think that the best answer that we have is–well, let’s back up. In order to resolve any kind of tradeoff, you have to have some kind of common metric. You have to have some kind of common currency. And I think that what utilitarianism does, whether it’s the moral truth or not, is provide a kind of common currency. So, what is utilitarianism? It’s basically the idea that–it’s really two ideas put together. One is the idea of impartiality. That is, at least as social decision makers, we should regard everybody’s interests as of equal worth. Everybody counts the same. And then you might say, ‘Well, but okay, what does it mean to count everybody the same? What is it that really matters for you and for me and for everybody else?’ And there the utilitarian’s answer is what is sometimes called, somewhat accurately and somewhat misleadingly, happiness. But it’s not really happiness in the sense of cherries on sundaes, things that make you smile. It’s really the quality of conscious experience.

“The quality of conscious experience…” Jesus. Sounds a hell of a lot like “feelings,” that personal and necessarily apolitical matter, to me.

Seriously, feelings don’t matter. They can be drugged into being, they can be twisted and turned towards any end, and they can be all too fleeting. Feelings, and their companion emotions, are therefore to be relied on about as much as you rely on a shipping window from the Postal Corporation of Jamaica. You sorta want to believe them but, knowing that everything runs on “island time,” if it’s really important, stick with UPS, that is, logic.

Greene: We all have our pleasures and pains, and as a moral philosophy we should all count equally. And so a good standard for resolving public disagreements is to say we should go with whatever option is going to produce the best overall experience for the people who are affected. Which you can think of as shorthand as maximizing happiness–although I think that that’s somewhat misleading. And the solution has a lot of merit to it. But it also has endured a couple of centuries of legitimate criticism. And one of the biggest criticisms–and now we’re getting back to the Trolley cases, is that utilitarianism doesn’t adequately account for people’s rights.

HAHAHAHA!!! Ok, I need to stop and breathe for a second…

If our “moral philosophy” is pure socialism, that is, everyone is equal just because we say so, then we’re in deep, deep trouble… but obviously no more than every socialist implementation.

Again, producing the best overall “experience” is so far from a moral imperative, much less a moral good, it’s ridiculous. If society is going to legitimately concern itself with personal experiences, where could it possibly end? I’ll tell you where: we’ll end up taking surveys of microaerophilic soil bacteria in garbage dumps and charting it over time to make sure that they have “the perfect conditions for a happy life” or some such nonsense. If all people matter then why not all fetuses and corpses, and if all fetuses and corpses matter, why not plants and animals, and from there patent, ear-biting insanity is just a quick hop, skip, and a jump.

Positive experiences are earned, not doled out like candy on Halloween. It’s negative experiences that are typically doled out, top-down, as Greene is suggesting, directly contradicting any hope his naive idealism has of ever functioning in the real world.

Not that Greene wants to function in the real world. He, like Obama, thinks he understands how the world actually works and completely expects that the world will conform to his magical ideas. Because reasons.

Roberts: So, here we are in the United States. We’re in this pasture. We’re all here together. We have very different philosophies. Unfortunately, we don’t really have–not only do we disagree, even if we agreed, you and I, on what the right, say, way to adjudicate our dispute, we don’t really have a mechanism for implementing it. We think we do. We call it democracy. But it’s a very imperfect mechanism that often exploits our differences for the benefit and gain of individuals. So it’s not obvious to me that it’s even a good idea to say, Let’s pretend we could decide what is the greatest happiness across these 330 million people, let alone the 7 billion, and then hope that somehow it’ll get implemented. Is that really a practical solution to our political problems?
Greene: No, I don’t think that there is any alternative. I think that we are living with someone’s attempts to adjudicate these tradeoffs of values, and we can either just accept what the powers that be put in front of us, or we can vote our conscience and try to change them or vote our conscience and say, yes I endorse this. I think that what you’re objecting to is the difficulty of the problem, not an inherent problem with the solution, if you want to call it that, that I’m proposing.

Of course there’s no alternative to democracy!iv How could there be when your head is buried eleventy feet in the ground and all you can smell is the faint whiff of the endless and entirely meaningless letters behind your name.

So ya, “vote your conscience” because that’s how change happens. That’s what Genghis did, that’s what Lenin did, that’s what you should do. It’s the only possible way!

And, to close, why wouldn’t a Soviet-Harvardite take any opportunity to turn a moral discussion into one about taxes…

Greene: So I think it’s easier to think about these things with a concrete example. So, take the case of raising taxes on the wealthiest Americans. Now, I know that this is controversial. But let’s suppose that government spending can provide good stimulus to the economy and can increase employment and make things better off for the people who are employed as a result. Okay, so you have to do a tradeoff. You would have to say, How much do the wealthiest people lose by having their incomes reduced by some amount–someone who is making half a million dollars a year might, instead of paying 30% in taxes, pay 40% or something like that–versus the benefits that go to people who now have jobs as a result of expansion of the public sector, or children who have a better shot at living the good life because of increased commitment to early childhood education, etc. There are a lot of empirical assumptions or questions here. But if we can at least agree on the empirics, then there’s the question of, Okay, is this tradeoff worth it? I don’t think there’s any way to avoid asking that question, and I think that in a lot of these cases, it’s actually pretty clear–that, for example, taking people who are already very wealthy and reducing their income somewhat doesn’t really do much to their happiness. Whereas if you provide opportunities to people at the bottom of the scale, that actually can make an enormous difference in their lives. So, you know, I think that the alternative is to just say, let it just evolve the way it evolves without consciously thinking about this as a social problem. But I don’t think that that’s a better alternative.
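To be scrupulously fair to Greene, his “pretty clear” claim leans on diminishing marginal utility of income. A minimal sketch of that assumption, using log utility, a conventional but hardly settled choice, and entirely invented income and transfer figures:

```python
import math

# Log utility is a conventional stand-in for diminishing marginal utility;
# the incomes and transfer below are illustrative, not empirical claims.

def utility(income: float) -> float:
    return math.log(income)

# A hypothetical $50,000 transfer from a $500,000 earner to a $50,000 earner:
rich_before, poor_before = 500_000.0, 50_000.0
transfer = 50_000.0

rich_loss = utility(rich_before) - utility(rich_before - transfer)
poor_gain = utility(poor_before + transfer) - utility(poor_before)

# Under these assumptions the poor earner's gain (ln 2, about 0.69) dwarfs
# the rich earner's loss (ln 10/9, about 0.11), which is the whole of the
# "pretty clear" tradeoff.
assert poor_gain > rich_loss
```

Under these invented numbers the arithmetic does favour the transfer; the contention is whether the assumptions, log utility, frictionless transfers, benevolent administrators, survive contact with reality.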

Props to Russ for countering the above point by mentioning the 10-20% boundaries for charity as described in Jewish Law. Really makes the Tithe³ seem modest, neh?

So anyways, that’s the Soviet-Harvard illusion.

It’s a thing.

___ ___ ___

  1. Naive Rationalism is closely related to Naive Interventionism, which Taleb defines as:

    Intervention with disregard to iatrogenics. The preference, even obligation, to “do something” over doing nothing. While this instinct can be beneficial in emergency rooms or ancestral environments, it hurts in others in which there is an “expert problem”.

    This, I define as the opposite of decency. []

  2. I’ve used Russ’ weekly conversations to launch tirades before and I doubt if this will be the last time either. For earlier examples, see Y Combinator: The American Idol of Venture Capital and Jeffrey Sachs: The RSM of African Interventionism. []
  3. mircea_popescu: I don’t per se care what you do, it’s a moral discussion. “What should be done.”
    adlai: Right. If I get a well-capitalized account profiting from this, what’s morally wrong with that? You could point out that I’ve fallen short of pure greed by not accepting investment, paying dividends, and retaining some fee as a personal profit.
    mircea_popescu: Nothing at all. (Note that i use “moral” properly, not in the bastardized form common among the plethora of would-be stalins without the ballsack.) […] (and generally, moral thought proceeds quite strictly : 1) is this a good thing or a bad thing ; 2) how much of it ? there’s no overlapping these stages) []
  4. Because who ever heard of de Tocqueville ? []

25 thoughts on “It’s called the Soviet-HARVARD illusion for a reason and that reason is Joshua Greene.”

  1. majamalu says:

    “there’s no such thing moral superiority, there’s only what works and what doesn’t”

    Works for whom? That’s a dangerous notion. Remember: from Himmler’s perspective, gas chambers worked fine.

    • from Himmler’s perspective, gas chambers worked fine.

      Not entirely. Himmler and his crew didn’t last a quarter century because they violated the oldest and most sacrosanct law in all of statecraft: Keep the jooz so you can keep the economy. Keep the economy so you can keep the state.

      Himmler et al. broke this law and failed accordingly, so his “morality” didn’t work and therefore doesn’t fulfil the requirement of an actual morality. While there’s more than one way to skin the morality cat, the criterion of working and continuing to work is paramount and non-negotiable. No working, no morality.

      Whether dysfunctional moralities are “superior” or “inferior” is therefore unaddressable in the real world. Only in a parallel universe can they be compared with one another… but they can never be compared to real moralities in the real world.

  2. majamalu says:

    Moral rules are not superior or inferior, nor functional or dysfunctional. Moral rules are valid or invalid. For example, if I say that stealing is ok (morally acceptable) because private property should be abolished, I’m falling into a contradiction: the moment I steal something, I’m taking possession of that thing (thus acting against the rule I put forward).

    We are not trained to identify invalid moral rules; we are trained (by immoral institutions) to believe that morality is a matter of opinion. That’s why I think your mistake is a dangerous one: it is the root of moral relativism (which feeds violent institutions).

    • Moral rules are not superior or inferior, nor functional or dysfunctional. Moral rules are valid or invalid.

      Leaving the blank-eriority part aside, I’m not sure I see the distinction between logic and function here. Logic must be based on premises, premises that work, hence, a valid morality should also be a functional morality. Or am I missing something here?

      For example, if I say that stealing is ok (morally acceptable) because private property should be abolished, I’m falling into a contradiction: the moment I steal something, I’m taking possession of that thing (thus acting against the rule I put forward).

      Heh, no one is making this claim as you’ve presented it. But even if someone claimed that “stealing is ok” and left it at that, “stealing” still needs to be defined. My sense of the matter is that stealing is most, and perhaps only, egregious when the victim is unconscious of the theft, e.g. taxation and inflation. When the victim is cognisant of the theft, he is expected to defend himself and his honour to the best of his ability. Should he successfully defend himself, then there’s no theft to worry about, but should he lose the battle, he ought to recognise that he was, and possibly is, inferior to his attacker. Private property is to be fought for and defended, and it’s really only the losers of wars who claim “theft.”

      We are not trained to identify invalid moral rules; we are trained (by immoral institutions) to believe that morality is a matter of opinion.

      I’m not sure who “we” is here but it can’t possibly be everyone and it certainly doesn’t include me. While there are plenty of people masquerading about with empty, “progressive” moralities, there will always be those of us with a better sense of, in the words of Reb Tevye, tradition. Long may it be so.

      That’s why I think your mistake is a dangerous one: it is the root of moral relativism (which feeds violent institutions).

      What, you’re afraid of violence now?? While I’m not one to advocate for mass and indiscriminate murders, a little discipline goes a long way. And yes, sometimes that means raising a hand.

  3. majamalu says:

    When you “raise your hand” (except in self defense, of course), you are admitting your inability to control your irrational impulses, to argue logically, to persuade, to negotiate, and yes, you are legitimizing and contributing to the power of violent institutions.

    Violent institutions need legitimacy more than they need armies. Without legitimacy, they can’t get people to approve things like mass murders.

    • When you “raise your hand” (except in self defense, of course), you are admitting your inability to control your irrational impulses,

      Says who? What’s irrational about discipline? Seems to me that it’s on the basis of discipline that most everything of lasting value has been created in this world.

      to argue logically, to persuade, to negotiate, and yes, you are legitimizing and contributing to the power of violent institutions.

      While I’m all for logical, clear-minded debate, it’s not for everyone and therefore not a useful tool in every situation. We aren’t all rational actors optimising EV, no matter how nice it’d be if we were. Productive institutions require multiple means of extracting value from their members, one of which is language, another of which is physical discipline. Just the way people are.

      Violent institutions need legitimacy more than they need armies. Without legitimacy, they can’t get people to approve things like mass murders.

      “People” don’t need to approve anything. They need to be productive in order to have purpose. It’d really be best if “the people” stayed out of politics altogether, lest we have another bloody century like the last.

  4. Alrenous says:

    Still not cynical enough.

    Assuming that people can accurately predict what they’ll do in such a scenario, a stretch to be sure, it’s interesting to note how many participants are paralysed by their collective notions of “meanness.”

    Assuming they can predict…and will honestly tell you.
    1.1. They can’t know.
    1.2. They can know, but: rational ignorance. You ever personally encountered a trolley problem?
    1.3. They do know, but lie about it.
    1.4 They could know, might want to know, but aren’t going to be conscientious about answering random interview questions.

    2.1 The life of five strangers is as irrelevant as the one stranger. What isn’t irrelevant is having their friends find out they pushed a guy to his death. Humans are supposed to have or fake barriers against random homicide. Imagine instead a cute chick weepily declaring they should’ve but “Couldn’t, just couldn’t!” Which makes more friends, you think?
    2.2 If the fat one is their daughter, no number of strangers will ever motivate them to push, and that’s a good thing. If their daughter is on the track, the fat guy is going off the bridge before they even consciously know what’s going on. As per 1, even if they know this reality, they can’t possibly say it out loud in modern times.
    2.3 The calculation then becomes whether they want to manipulate the interviewer or not.

    Seriously, feelings don’t matter.

    This is a disempowering belief. Its purpose is to make it easier for people like me to manipulate you, among other flaws.

    Not that Greene wants to function in the real world.

    I very much doubt Greene alieves a single word of what he’s saying. It’s there to signal that he’s kind and thoughtful, but he wouldn’t push his daughter onto the trolley any more than you would. He is not a scholar. From which we learn Roberts is not a scholar either, else catastrophically incompetent. Presumably by chance, he had Bueno de Mesquita on the show a few times, and Bruce is a scholar, so they’re worthwhile.

    • Hey, no one but no one accuses me of not being cynical enough! ;)

      Assuming they can predict…and will honestly tell you.
      1.1. They can’t know.
      1.2. They can know, but: rational ignorance. You ever personally encountered a trolley problem?
      1.3. They do know, but lie about it.
      1.4 They could know, might want to know, but aren’t going to be conscientious about answering random interview questions.

      All valid points. Plenty of epistemic opacity in such a scenario. And really, why would they reveal themselves to be “monsters” to some judgemental stranger with a pen and pad? Cui bono? Not the interviewee.

      2.1 The life of five strangers is as irrelevant as the one stranger. What isn’t irrelevant is having their friends find out they pushed a guy to his death. Humans are supposed to have or fake barriers against random homicide. Imagine instead a cute chick weepily declaring they should’ve but “Couldn’t, just couldn’t!” Which makes more friends, you think?

      I don’t think that such fake barriers are hard and fast rules in all times and all places. We just happen to see it in the EOL (English as an Only Language) World today.

      2.2 If the fat one is their daughter, no number of strangers will ever motivate them to push, and that’s a good thing. If their daughter is on the track, the fat guy is going off the bridge before they even consciously know what’s going on. As per 1, even if they know this reality, they can’t possibly say it out loud in modern times.

      Right.

      2.3 The calculation then becomes whether they want to manipulate the interviewer or not.

      I’m not sure “manipulation” is it, that’s too much power and control to ascribe to people who think of themselves as “nice,” as is likely to be the case in the University students interviewed for this study.

      This is a disempowering belief. Its purpose is to make it easier for people like me to manipulate you, among other flaws.

      Not sure I follow… The purpose of thinking that feelings are irrelevant is to make it easier for others to manipulate me? I can’t say I ascribe much of any purpose to my thinking, or at least as little as possible. Perhaps you can expand.

      Yes, Roberts does manage to attract the odd erudite. As further proof, he’s had Taleb on the show a handful of times. Looks like Bueno de Mesquita was on the program in 06-08, a little before I started following it regularly, so you’ve given me some homework!

  5. Alrenous says:

    I don’t think that such fake barriers are hard and fast rules in all times and all places. We just happen to see it in the EOL (English as an Only Language) World today.

    Fair point. I should say ‘Westerners’ are supposed to have or fake not ‘humans.’ Now I want to take the trolley problem to various hunter tribes and probe their rules. Presumably one local equals infinite next tribe over…but what about local vs. local? Do the tribes differ in their attitudes?

    In any case, mainly I want to say the interviewer is being irrational, not the interviewees. The interviewees are failing at the logical consistency game only because they have to win at the making friends game, which they are doing rationally and effectively, if less than consciously.

    Perhaps you can expand.

    I use the ‘system 1’ vs ‘system 2’ convention.

    System 1 presents its conclusions to rational consciousness, system 2, in the form of emotions. They aren’t irrelevant, they are data about the world – in this case, less-conscious parts of your own brain.

    Certainly, all the pro-feels people misuse them as well. Many of the emotion-conclusions come properly tagged but many of them don’t. “I feel afraid,” doesn’t mean “there’s something to be afraid of.” However, system 1 responds reliably to specific situations with fear, indeed a wide variety of subtle variations on fear, each meaning it has detected a different kind of pattern. I learned what all the patterns are and it turns out system 1 has at least two standard deviations of epistemic competence on me. I am never right when it’s wrong, only the reverse. Occasionally I misinterpret the signal, and that’s as far as I get toward finding it wrong.

    As complex systems must essentially have fixation across a sexually-reproducing species, your system 1 almost certainly works the same way.

    To use the ‘reals not feels’ meme for manipulation, present solid-seeming statistics or other kinds of logic for a false conclusion. (Sophistry.) System 1 will ring the alarm, which then has to be disarmed. This in particular is what I mean by my system 2 is never right when my system 1 is wrong – I frequently find myself knowing an assertion is incorrect but not knowing how I know. Most weeks, I’d say.

    I don’t know if you’ve read Yudkowsky, I find his thinking decidedly impure and much of it should be obvious, but he’s right about having beliefs pay rent. I use system 1’s superior intellect to plan my day. It pays rent.

    Side benefit: it’s hard to be impudent to epistemic superiors when I’m forced to take a stance of humility toward other parts of my own brain. I managed it sometimes anyway, but there you go…

    • mainly I want to say the interviewer is being irrational, not the interviewees.

      Ya, I can see that.

      I use the ‘system 1’ vs ‘system 2’ convention.
      System 1 presents its conclusions to rational consciousness, system 2, in the form of emotions. They aren’t irrelevant, they are data about the world – in this case, less-conscious parts of your own brain.

      I’m somewhat familiar with Kahneman’s paradigm though I wouldn’t call myself an adherent. I think there’s room for fast analytical reactions, perhaps even distilled into something as accessible as a heuristic, to be cleaved from emotions per se. Though perhaps he already allows for this in his theory?

      I learned what all the patterns are and it turns out system 1 has at least two standard deviations of epistemic competence on me. I am never right when it’s wrong, only the reverse. Occasionally I misinterpret the signal; that’s as far as I get toward finding it wrong.

      Very interesting. What did this mapping process consist of? How long did it take until you were satisfied enough with it to put it into practice? Was it as formalised as you make it sound, or is it more a matter of attunement?

      As complex systems must essentially reach fixation across a sexually-reproducing species, your system 1 almost certainly works the same way.

      Hmm. I think that there’s quite a lot of room for variance in cognition, particularly in a species that’s as reliant on it for survival as we are, and in such a broad range of natural environments, no less. To say that you and I experience similar types of intuitive reactions, given our widely different social and physical environments (probabilistically, if not specifically), is a stretch. If it weren’t, we might expect similar outcomes for people in similar environments, which is in no way the case in reality. As such, regardless of whatever fixation we might share, we don’t share lovers any more than we share mothers.

      my system 2 is never right when my system 1 is wrong

      This would seem to defeat the point of system 2, would it not? If the quicker system 1 is wrong, as it must frequently be because it only recognises patterns and is inclined to gloss over specifics, how can it be that the more reflective system 2 could not see its way through to the right answer? Perhaps I’m misunderstanding something here…

      I don’t know if you’ve read Yudkowsky, I find his thinking decidedly impure and much of it should be obvious, but he’s right about having beliefs pay rent.

      I haven’t read him but I’ll take a look. BTW I’m already quite enjoying Bueno de Mesquita!

      Side benefit: it’s hard to be impudent to epistemic superiors when I’m forced to take a stance of humility toward other parts of my own brain. I managed it sometimes anyway, but there you go…

      Ha! If only Reddit had such humility I’d have a much smaller collection of heads on spikes around my castle.

    • Alrenous says:

      What did this mapping process consist of?

      Whenever faced with a situation, I would get a system 1 read and a plan based on it. I would then get a system 2 read and a logical plan. I would then go through the situation and compare the plans using hindsight. When faced with apparently disparate situations that felt the same, it turned out I should react to them the same. It took some years to learn it all; I think it’s supposed to be mainly automatic and happen during childhood, but schooling and similar features of modern life intentionally interrupt it. See also the denigration of instinct and intuition by scientists. Learning what kinds of foods you like and don’t like is one simple edge of the process.

      To say that you and I experience similar types of intuitive reactions

      To be more specific, I strongly suspect you have an intuition that isn’t fully tagged and therefore must be explored, and commands more neurons than your rational consciousness. All else is plastic.

      Perhaps I’m misunderstanding something here…

      You’re not entirely estranged from your intuition.
      A well-oiled system 1 remains silent whereof it cannot speak. More precisely, it answers, “Refer to system 2.” For example, mine doesn’t know math. (In my case, rather than oil or practice, it never spoke of which it did not know, but I took some time to learn the various flavours of ‘don’t know’ it offers, only one of which I originally recognized as such.)
      Further, system 1 can learn things from system 2. I taught mine all about logic. https://en.wikipedia.org/wiki/Chunking_(psychology) I get the benefits of speed without loss of attention to detail.

      Though perhaps he already allows for this in his theory?

      My theory isn’t entirely dependent on him, and if his doesn’t, then so much the worse for his theory. I’m mainly in it for the useful terminology.

    • It took some years to learn it all

      Now for a personal project with no possibility of external validation, that’s dedication!

      schooling and similar features of modern life intentionally interrupt it.

      Most certainly. Gotta stay busy, busy, busy!

      I strongly suspect you have an intuition that isn’t fully tagged and therefore must be explored

      This is accurate, but it’s not like I’m a fully-formed butterfly just yet!

      Further, system 1 can learn things from system 2. I taught mine all about logic.

      This sounds very much like a learned skepticism towards economists and other empty-suited philosophasters. Not a bad thing to have at all!

    • Alrenous says:

      As far as I’m concerned, p(deserves_title_philosopher|philosophy_phd) ~= 0. Establishment philosophy respects Brian fucking Leiter for Gnon’s sake. Their respect is an insult. Land, obviously, can think in straight lines, and that’s pretty much it.

      Of course I’m a straight-up Cartesian dualist and, apparently, I re-derived logical positivism. If you know anything about mainstream philosophy, you know they hate these positions.

      But yeah, having logic on system 1 tap is, if I’m not conflating it with something else, utterly awesome and definitely worth the effort.

    • Brian Leiter eh? Well, he is an American academic, so he sorta has to end up lumped in with the pseudoscience crowd.

      While I’ve studied a modest amount of philosophy, of which you’ll find liberal sprinklings on these pages, I can’t say I’ve ever fallen into the trap of “mainstream philosophy.” Sounds awful. As far as liberal arts theories go, I fully subscribe to the power of the Lindy effect.

  6. […] When I say that the Alberta government is “expecting” such and such in the future, it’s because they’ve made projections* based on the wise words of their ever-so-brilliant team of unbelievably well educated economists. Y’know, just like the Soviets. […]

  7. […] everest because reasons that don’t involve us being a plasticised two-bit knock-off of The Great Soviet Union, I mean how could you even suggest such a thing at a time like this when you know damn right that […]

  8. […] rooftop to rooftop for speedier trades, and, perhaps most importantly of all, 3) useless fucking Soviet-style nation states impotently diddling the market’s twat.v Quite the opposite, in […]

  9. […] a market isn’t : A system of centrally controlled trade and artificially constructed exchange for determining the “fairest” prices as a […]

  10. […] Ivy League institution represented by Joshua Greene, the human embodiment of the Soviet-Harvard illusion […]

  11. […] Spanning Nash’s life beginning with his entrance into Princeton in 1947, when the campus was but a few suit-clad young men casually smoking in class, playing football in quad, and solving centuries-old math conjectures with a grease pencil on the windows of the library, to 1994, when Nash was awarded the Nobel Prize for his development of the Nash Equilibrium, which had come to be widely used in the increasingly charlatanised and quantified field of economics in the interim. Unlike so goddam many of the universally acclaimed products and “solutions” of the 20th century, to say nothing of the atomic-powered IoT vapourware, the Nash Equilibrium, also known as the Nash Solution, is actually useful – that is, it actually applies outside of the conscripted confines of Soviet-Harvard acadaemia. […]

  12. […] a market isn’t : A system of centrally controlled trade and artificially constructed exchange for determining the “fairest” prices as a means […]

  13. […] plus an entire echafaudage supported by Nobel Prize winning “economists” and associated Harvardites. It’s quite the dog and pony show, really. […]

  14. […] archives of Contravex are ripe with the fruits of Russ’ weekly show, eg. Blinders ; Wences whacked, Xapo zapped ; It’s called the Soviet-HARVARD illusion for a reason and that reason is Joshua Greene, to […]

  15. […] dumb enough to attend Soviet-Harvard in the first place – and I say that in all seriousness as I really do think very little of […]

  16. […] Those unencumbered by individuation telling those imbued therewith that displays of power are “sociopathological”iii is beyond backwardsiv and can only be the result of a full generation of castrating and nigh-on-totalitarian socialism. […]
