24 January 2011

No Mind

Believe it or not, while reading the piece discussed below on "situationism" and Virtue Ethics I thought of that abominable recent flick The Green Hornet (avoid at all costs). I'll explain why...in due course.

In the Summer 2009 issue of Daedalus (link below; unearthed by happenstance from a recently unpacked box while searching for a Spore computer game disc) is an essay by Kwame Appiah discussing, primarily, ethics and the psychology of morality. It was a piece culled from his book Experiments in Ethics, which deals with much that is happening in the neurosciences and cognitive psychology and how this might affect moral philosophy (hell, just morality in general, I guess). It's a pretty interesting and easy piece to read.

I went looking for reviews of the Appiah book and there's a pretty good one in the NYRB by Jeremy Waldron (10/8/09--link below).

These are important issues in moral philosophy. Primarily, it seems our brain does a lot of "work" for which we have no relevant narrative, and terms such as those we label "Virtues" may not be "agent-centered" or even "action-centered". Our brain commands us, we act, and then we tell the tale of the action.

There is a strong "situationist" movement regarding Virtue that seems very interesting but also too readily "reductionist" in our easy conception of it. We act "situationally" and "virtue" is flexible, or situationally definable, for lack of a better term. But the act itself may in no way be "virtuous"--it may simply be the brain responding to other "unreflected upon" stimuli. This is akin to what I take to be the thrust of the Bronk poem posted yesterday, The Limitations of the Mind Are Its Freedom:

You know there are always messages we find
--in bed, on the street or anywhere, and the mind
invents a translation almost plausible;


The two examples discussed in the Appiah piece of experiments in "moral psychology" may be well known to you if 1) you listen to Radio Lab and/or 2) you listen to Philosophy Bites.

1) The "trolley car" dilemma and variants.
From Wikipedia: Taking a neuroscientific approach to the trolley problem, Joshua Greene, under Jonathan Cohen, decided to examine the nature of brain response to moral and ethical conundra through the use of fMRI. In their better-known experiments, Greene and Cohen analyzed subjects' responses to the morality of responses in both the trolley problem involving a switch and a footbridge scenario analogous to the fat man variation of the trolley problem. Their hypothesis suggested that encountering such conflicts evokes both a strong emotional response and a reasoned cognitive response that tend to oppose one another. From the fMRI results, they found that situations evoking a more prominent emotional response, such as the fat man variant, would result in significantly higher activity in brain regions associated with response conflict. Meanwhile, more conflict-neutral scenarios, such as the relatively disaffected switch variant, would produce more activity in brain regions associated with higher cognitive functions. The potential ethical ideas being broached, then, revolve around the human capacity for rational justification of moral decision making.


My take is that this "fat man" variant ("why is the man fat, dad," asks my 11 year old..."that doesn't seem necessary.") speaks to a technology-induced disaffection. Pulling a lever to affect an outcome is mechanical and distant and does not engage your "feelings"; pushing a man indeed brings you into the equation full force and jumps out at us as VERY wrong EVEN if the Utility calculation is equal. The more we engage in "moral" conflict at a distance (modern war as video game), the more we are able to "de-personalize" these choices and create a world enacted out of "calculation"--and do away with morality as a choice based in our notions of the "good," or of what is a right/wrong action based on ethical principles (a reason to be against Utilitarianism and, hell, Pragmatism as well--at least as it's commonly defined).

And 2) I don't really want to talk about...sorry. But there is a link below and a YouTube video about this. The point here is "intentionality": the "moral" status of "harming" or "helping" when it comes as a secondary or "accidental" result of a primary intention leading to an action--an action that may or may not be intrinsically moral or immoral but is then "colored" by the consequences of the act/intention.

Finally, what you've been waiting for--how in the world does any of this relate to that excrescence that is the Green Hornet movie? Only in this way--Cato "acts" in the movie in slow motion. All of his movements are a product of the stillness of his mental activity. He sees all and acts in accordance with the mind's dictates...Cato (of Asian descent and so a man of Eastern philosophy) doesn't act; his mind directs his motions on the "first order," while the "Western" rube that is Seth Rogen's character can only "see" what's in front of his face and "think" about it in a "second order" reflection. He is hampered by his "thinking I".

Appiah in Daedalus

Waldron NYRB

Quandaries and Virtues
Against Reductivism in Ethics
Edmund L. Pincoffs


RadioLab-Morality

Philosophy Bites Joshua Knobe


YouTube Experimental Philosophy--"intentionality"

War Games

1 comment:

  1. I meant to add too that the "data-point" experiments done may be interesting, but don't they simply raise more questions, require more granularity?

    Appiah makes this point but further we might ask, why are we creating computer code scenarios that spit out computer code answers? Data in, Data out. Garbage in, Garbage out.
