I am through blogging. I appreciate our regular and intermittent readers over the years. The problems with the blog are ongoing and suggest to me it is time to quit. Keep up the good fight.
Best wishes, Patrick
Posted at 12:36 AM in Patrick S. O'Donnell
I apologize for the disappearing images and photos in some posts, including the last two. I have written to Typepad about this, so I hope it will be corrected soon. Thanks for your patience. Patrick
Posted at 03:04 PM in Patrick S. O'Donnell
I have arrived at the point where I can wholeheartedly (thus unreservedly) agree with the prolific and brilliant philosopher Larry May’s statement that Thomas Hobbes is “arguably the greatest systematic philosopher to have written in the English language.” If that be too extravagant for you (e.g., those of you enamored with Hume), consider that Hobbes was “the first great philosopher to write in English.” May himself, by my lights, is our foremost philosopher of international criminal law and justice (he writes in other areas as well, as his works on shared intentionality, collective responsibility, and the morality of groups generally attest).
More than a few respectable and well-known philosophers have made surprising mistakes and offered misleading if not simply incorrect interpretations of Hobbes’s ideas, however plausible and imaginative in construction. The views of these philosophers “ruled the roost” for the latter half of the twentieth century and beyond. Their portrait of his philosophical corpus long ago moved me to either ignore or dismiss Hobbes as no longer worth my attention. That portrait happens to have been invoked and reproduced, in whole or in part, by philosophers and intellectuals who are not scholars of Hobbes, caricaturing his ideas such that he has become a bête noire or straw man for many moral and political philosophers, political scientists, and their credulous students.
I am neither a Hobbes scholar nor a philosopher, but I’ll confess to having the chutzpah necessary to recommend a list of ten titles (books only) that reflect a rather different—more sophisticated, nuanced, and suggestive—picture of Hobbes’s moral and political philosophy than the one I learned at university (which was, more or less, the ‘traditional’ interpretation, although creatively enhanced by the works of ‘analytical philosophers’ in the mid- to late-twentieth century). A few of the titles below I have not read in toto but learned about from others, whose reference to and use of material from them allows me to trust their judgment. I am sure I have not listed all the books that might meet the above criteria for inclusion, but this is a good start, especially for those of us who are not specialists in this area yet possess no less avid or ardent an interest in moral and political philosophy, and Liberalism in particular.
Posted at 03:56 PM in Patrick S. O'Donnell
When Republicans run for public office, at whatever level of government, one simple analytical or interpretive method to employ with regard to their campaign ads and public rhetoric in whatever fora (e.g., in person or in social and mass media) is to compare what they say to what they do not say. This will allow you to see clearly that the GOP has become a regressive Manichaean political party of inchoate grievance, resentment, fear, anger, racism and rage. They are unable to proffer a coherent and thus plausible, let alone positive, political platform of public policies that address the sundry problems of our time and place: from climate change to gun violence, from a crumbling infrastructure to environmental degradation, from a deformed and neglected system of public education to racial segregation, from an indefensible military budget to an inexcusably inadequate public health system, and so forth and so on. The substance of their unhinged political ambition revolves around the tired, time-worn, ineffective tropes of de-regulation and privatization. The authoritarian and fascist flavor of the party’s rhetoric has resulted in systematically deleterious effects on our electoral system and public discourse, coinciding with a conservative majority on the Supreme Court contemptuous of defensible constitutional doctrines and blithely dismissive of hard-won democratic changes that have given substance to democratic liberties and aspirations. Republicans are shameless in relying on nostalgic myths of American exceptionalism and xenophobic “Christian” nationalism. The cult of Trump, be it its leaders (who are most culpable and blameworthy) or the led, is made up of “social characters” (in Erich Fromm’s sense1) whom Thomas Hobbes long ago identified as “the Foole,” the “Dupe,” the “Zealot,” and the “Hypocrite” (the last aptly characterizes Republican leadership, being the ‘worst of the worst’ and ‘most vile of the vile’; furthermore, the ‘Hypocrite pretends to believe what the Zealot believes, or what the Foole believes, but only in order to manipulate others in his grab for temporal power’).2
Notes
1. See the term, “social character,” in the respective indices of Daniel Burston’s The Legacy of Erich Fromm (Harvard University Press, 1991) and Kieran Durkin’s The Radical Humanism of Erich Fromm (Palgrave Macmillan, 2014).
2. See S.A. Lloyd’s superb discussion of these “civil characters” in the chapter, “Fools, Hypocrites, Zealots, and Dupes: Civic Character and Social Stability,” in her book, Morality in the Philosophy of Thomas Hobbes: Cases in the Law of Nature (Cambridge University Press, 2009): 295-355.
Posted at 06:17 AM in Patrick S. O'Donnell
While Romanticism as a social and cultural movement is sometimes (or often?) viewed historically as a non-rational, irrational, or even supra-rational reaction to the ideas about reason and science prominent in the European Enlightenment, looking back it seems, to me at least, in some respects in keeping with the Enlightenment insofar as it often complements various facets of that movement or fills out those forms of sense and sensibility that did not, at the time and for some time thereafter, receive their due attention and consideration (to cite but one example: sundry types of sociability, such as salons and reading societies). I thus prefer to view Romanticism as simply softening the harder or cruder edges of European rationalism (in the end, more continuity than difference, the latter being a necessary yet not sufficient condition of the former). That said, I agree with Raghavan Iyer that the
“Romantics sought refuge from industrialism [and ‘the ideological superstructure of bourgeois capitalism’] in art and it was natural, though sad, that their apotheosis—art as a basis of moral protest—should have ended up in almost religious worship of art for its own sake. As aesthetic standards were threatened by industrialism, there was an overcompensation in the tendency to judge religion, morals, and society by purely aesthetic standards.”
Here is the opening paragraph from Kwame Anthony Appiah’s enjoyable essay in the NYRB, “Symphilosophizing in Jena,” although of course I recommend the entire piece. This is followed by a brief note from yours truly.
“The cult of individuality was born amid a melding of minds. Meldings must be preceded by meetings, of course, and the meetings took place in Jena, a university town in the German duchy of Saxe-Weimar with a population of 4,500 or so. If Jena was small, the minds that gathered there in the last years of the eighteenth century were large, and included the most consequential poets, critics, and philosophers of the era. The sparks they threw out electrified the world.”
A brief note:
August Wilhelm Schlegel and his younger brother Friedrich (‘Fritz’) apparently coined the term “symphilosophy” (and thus symphilosophizing), what Appiah calls “communal cognition” in this review essay. I came across this concept once before, in an article by the late Hector-Neri Castañeda, “Philosophy as a Science and as a Worldview” (in Avner Cohen and Marcelo Dascal, eds., The Institution of Philosophy: A Discipline in Crisis?, 1989), although its conception there appears to be far less vague than its original meaning among the Romantic philosophers (I do not know if Castañeda was aware of its origin, although I would be surprised if he was not). Castañeda here explains how his methodological proposal for philosophers arrives at the state of “sym-philosophy,” which is the result of seeing the world in light of both an ontology and a metaphysics that respect pluralism and relative epistemic perspectives, much like what we see in Jain philosophy, with its doctrines of anekāntavāda, syādvāda, and nayavāda (and, it seems, in the spirit of Paul Feyerabend’s ‘anything goes,’ which was not so much a principle as an attitude or approach forged in the fire of rhetorical polemics with Rationalists):
“Perhaps human-world reality is not a monolith, but a many-sided perspectival structure. Perhaps the greater understanding will be achieved by being able to see human reality now one way and now another way. Thus, we need ALL philosophical points of view to be developed, and ‘developed’ is meant in earnest: the more it illustrates the harmonious unison of the encompassing Forest Approach and the riches of the Bush Approach. Hence, all philosophers are part of one team collectively representing the totality of philosophical wisdom, and individually working the details of a point of view: we are ALL parts of the same human project. Looking at things this way, we realize that we need not polemicize against the most fashionable views hoping to supplant them with our own view [emphasis added]. Instead, with a clear conscience, we may urge the defenders of those views to extend them, to consider further data to make them more and more comprehensive, pursuing the goal of maximal elucidation of the structure of experience and the world. At the same time we urge other philosophers to develop equally comprehensive views that are deliberately built as alternatives. The aim is to have ALL the possible most comprehensive master theories of world and experience.
To be sure, we cannot foretell that such a plurality of views as envisaged is ultimately feasible. But neither can we prove that in the end there must be just one total view, bound to overwhelm all others. If many master views are feasible, then the greatest philosophical illumination will consist in alternately seeing reality through ALL those master views. It would be still true that the greatest philosophical light comes, so to speak, from the striking of theories against each other, but not in the destruction of one theory in the striking process, but rather in the complementary alternation among them. Each master theory would be like a pair of colored glasses with different patterns of magnification so that the same mosaic of reality can appear differently arranged [this calls to mind my youthful experimentation with psychedelics!]. Here Wittgenstein’s reflections on the duck-rabbit design are relevant. The different theories of the world give us different views, the rabbit, the duck, the deer, the tiger, and so on, all embedded in the design of reality. The analogy is lame on one crucial point: the master theories of the world and experience must be forged piecemeal: with an eye on the Bush Approach, patiently exegesizing the linguistic and phenomenological data, and with another eye on the Forest Approach, building the theoretical planks (axioms, principles, theses, rules) carefully and rigorously.” [….]
Among the consequences of “pluralistic meta-philosophy” noted by Castañeda is a “later stage in the development of philosophy” in which we will be rendered fit to engage in a “comparative study of master theories of the world and experience,” or what he terms “dia-philosophy.” In other words, our master theories of philosophical structures will be sufficiently rich and comprehensive for us to be able to articulate holistic and dia-philosophical critique: “compar[ing] two equally comprehensive theories catering to exactly the same rich collection of data, and, second, assess[ing] the compared theories in terms of their diverse illumination of the data.”
“The natural adversary attitude” will take the form of “criticisms across systems or theories,” but “not as refutations or strong objections, but as contributions of new data as formulations of hurdles for steady development.” Castañeda christens the development of master theories of the world and experience for dia-philosophical comparison “sym-philosophy:” “Thus the deeper sense in which ALL philosophers are members of one and the same team is the sense in which we are all sym-philosophers: playing our varied instruments in the production of the dia-philosophical symphony.”
Posted at 02:56 PM in Patrick S. O'Donnell
“Young adults in California experience alarming rates of anxiety and depression, poll finds”
Los Angeles Times, Sept. 30, 2022
By Paloma Esquivel
“Young adults in California experience mental health challenges at alarming rates, with more than three-quarters reporting anxiety in the last year, more than half reporting depression, 31% experiencing suicidal thinking and 16% self-harm, according to the results of a survey commissioned by the California Endowment. The numbers reflect a years-long trend of worsening mental health among young people that was exacerbated by the COVID-19 pandemic, experts say.
The poll of nearly 800 Californians ages 18 to 24 also found young people facing significant barriers to getting help — with nearly half of those who wanted to speak to a mental health professional saying they had been unable to do so, and many saying cost or lack of access had stopped them. [….]
The poll reveals a generation under strain from a wide range of problems, with 86% saying the cost of housing was an extremely or very serious problem and more than three-quarters saying the same about the cost of college, lack of well-paying jobs, homelessness, drug and alcohol abuse, and the cost and availability of healthcare.
Mental health ranked just behind the cost of housing as a widespread problem for young adults, with 82% calling it an extremely or very serious problem. When asked to pick a word that described how they felt about their generation’s future, the two dominant feelings were uncertainty and worry.
‘If we compare this to what we get when we talk to [older] adults, we don’t see the same breadth and intensity of concern about this wide range of issues,’ said pollster David Metz of the research firm Fairbank, Maslin, Maullin, Metz & Associates, which conducted the survey. ‘I think that says something about the burdens that young people are feeling.’” [….] The full article is here.
* * *
The findings reported here are part of a deleterious nationwide trend in public mental health issues (so described, this includes physiological symptoms of various kinds, in keeping with our knowledge of mind-body causal interactions), as U.S. Surgeon General Vivek H. Murthy warned at the end of last year. While just a hunch or suspicion (based on anecdotal evidence and trustworthy personal testimony), I’m inclined to believe the situation may be considerably worse than outlined in this article (one obvious symptom* that something is awry is the increasing rate of automobile accidents of late, which jibes with the countless daily stories from work and home of how horribly people are driving these days, including the widespread reports of ‘road rage’). Over fifty years ago, the psychoanalyst Erich Fromm wrote of the so-called normal person in contemporary society suffering from chronic low-grade schizophrenia marked by an inability to feel deeply, loneliness, anxiety, alienation, and lack of creative activity. The epigraph to the Introduction of Gabor Maté’s (with Daniel Maté) latest book, The Myth of Normal: Trauma, Illness and Healing in a Toxic Culture (Avery, 2022), is therefore, fittingly, a well-known quote from Fromm’s The Sane Society (1955):
“The fact that millions of people share the same vices does not make these vices virtues, the fact that they share so many errors [of formal and informal logic, reasoning, perception, cognition, etc.] does not make the errors to be truths, and the fact that millions of people share the same forms of mental pathology does not make these people sane.”
Now consider, if you will, the opening paragraphs of Maté’s book:
“In the most health-obsessed society ever, all is not well. Health and wellness have become a modern fixation. Multi-billion dollar industries bank on people’s ongoing investment—mental and emotional, not to mention financial—in endless quests to eat better, look younger, live longer, or feel livelier, or simply suffer fewer symptoms. We encounter would-be bombshells of ‘breaking health news’ on magazine covers, in TV news stories, omnipresent advertising, and the daily deluge of viral online content, all pushing this or that mode of self-betterment. We do our best to keep up: we take supplements, join yoga studios, serially switch diets, shell out for genetic testing, strategize to prevent cancer or dementia, and seek medical advice or alternative therapies for maladies of the body, psyche, and soul.
And yet collective health is deteriorating. What is happening? How are we to understand that in our modern world, at the pinnacle of medical ingenuity and sophistication, we are seeing more and more chronic physical disease as well as afflictions such as mental illness and addiction? Moreover, how is it that we’re not more alarmed, if we notice at all [here is where problems of cognitive dissonance, self-deception, pernicious forms of wishful thinking, and denial come into the picture]? And how are we to find our way to preventing and healing the many ailments that assail us, even putting aside acute catastrophes such as the COVID-19 pandemic? [….]
I have come to believe that behind the entire epidemic of chronic afflictions, mental and physical, that beset our current moment, something is amiss in our culture itself, generating both the rash of ailments we are suffering and, crucially, the ideological blind spots that keep us from seeing our predicament clearly, the better to do something about it. These blind spots—prevalent throughout our culture but endemic to a tragic extent in my own profession [a health care system suffused with capitalist imperatives and distortions]—keep us ignorant of the connections that bind our health to our social-emotional lives [that is, our welfare, well-being, and potential or possibilities for individual and collective human fulfillment or happiness or eudaimonia].”
* For me at any rate, another and more all-pervasive symptom has to do with deteriorating adherence to minimal standards of etiquette, good manners, and social norms more generally.
Please see, in particular, the titles below for works that support or complement Maté’s arguments about the social and cultural sources of “the entire epidemic of chronic afflictions, mental and physical.”
And relevant material is found in these bibliographies:
Posted at 06:59 AM in Patrick S. O'Donnell
The Greek notion of eudaimonia, which is a richer and I think less ambiguous concept than our typical conceptions of happiness, is nevertheless sometimes translated as “happiness,” which is understandable to the extent happiness has been formulated by some philosophers in terms of welfare and well-being. But as Martha Nussbaum writes in a note on the Greek word in The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy (Cambridge University Press, 1986):
“Especially given our Kantian and Utilitarian heritage in moral philosophy, in both parts of which ‘happiness’ is taken to be the name of a feeling of contentment or pleasure, and a view that makes happiness the supreme good is assumed to be, by definition, a view that gives supreme value to psychological states rather than to activities, this translation is badly misleading. To the Greeks, eudaimonia means something like ‘living a good life for a human being,’ or as a recent writer has suggested, ‘human flourishing.’ Aristotle tells us that it is equivalent, in ordinary discourse, to ‘living well and doing well.’ Most Greeks would understand eudaimonia to be something essentially active, of which praiseworthy activities are not just productive means, but actual constituent parts [this could include virtuous actions as ‘excellences’ of character]. It is possible for a Greek thinker to argue that eudaimonia is equivalent to a state of pleasure; to this extent activity is not a conceptual part of the notion. But even here we should be aware that many Greek thinkers conceive of pleasure as something active rather than stative; an equation of eudaimonia with pleasure might, then, not mean what we would expect it to mean in a Utilitarian writer.”
* * *
“It is philosophers who have the task of exploring what matters to us most—what is freedom? What is it genuinely for us to be happy? What is worth valuing and why?—but it is psychoanalysis that teaches us how we regularly get in the way of our own freedom, systematically make ourselves unhappy and use values for covert and malign purposes. Philosophy cannot live up to its task unless it takes these psychoanalytic challenges seriously.” — Jonathan Lear
“The parts of Freud’s writings that suggest some level of causal determination in fact coexist with his explicit view that one of the chief goals of psychoanalysis is to increase the patient’s ‘freedom’ (Freiheit), ‘autonomy’ (Selbständigkeit), and ‘initiative’ (Initiative). Thus the aim of psychoanalysis is to ‘free’ (befreien) the patient from intrapsychic ‘chains’ (die Fesseln), which normally increases the patient’s ‘self-control’ (Selbstbeherrschung) and gives ‘the patient’s ego freedom to decide one way or the other’ between conflicting motives. For Freud, it is the mark of a relatively healthy ego to be able to deliberate and exercise self-control and willpower in choosing and pursuing goals. [….]
Freud’s claim that the developed ego is guided by qualitative hedonism helps to bring out just how in his late writings ‘the programme of the pleasure principle’ is compatible with non-egoistic, and hence, moral behavior. This compatibility is largely a consequence of the fact that happiness as Freud uses the term for the goal of life is a different kind of end than the quantitative one of maximizing a single kind of agreeable feeling. ‘Happiness’ in life is an ‘inclusive end’ rather than a single ‘dominant end.’ That is to say, the activities through which it is sought are not means in an instrumental or neutral sense, but parts of a whole. To pursue happiness as an inclusive goal through such activities as artistic creativity, intellectual work, sensuality, love, and aesthetic appreciation is to enjoy each of these activities as contributing something qualitatively unique to a life plan. Insofar as these activities are means, it is in the sense of being constitutive of the comprehensive end of happiness in life as a whole. It is only through such activities that genuine happiness in the sense of ‘positive fulfillment’ is possible [Here we see Freud’s conception of ‘happiness’ is close if not identical to the classical Greek concept of eudaimonia, or at least several well-known conceptions thereof, which we might translate in the best sense to mean or imply the possibility of human fulfillment, the triune nature of which arguably entails, minimally and broadly speaking, freedom (as self-determination), human community, and self-realization. The converse of such human fulfillment could be said to be found in the several senses in which Marx employs the concept of alienation throughout his writings.] [....] Freud does not construe narrowly, then, the happiness at which the ego aims as always involving a self-interested goal. To the contrary, persons are observed to find pleasure in a whole range of activities, including fulfilling the needs of others, and even in moral conscientiousness. For there is ‘satisfaction’ to be obtained in acting benevolently in accordance with one’s ‘ego ideal’ and ‘a feeling of triumph when something in the ego coincides with the ego ideal.’”— Ernest Wallwork, Psychoanalysis and Ethics (Yale University Press, 1991)
* * *
We need not assume that freedom, at least insofar as it is thought conducive to happiness as a necessary yet not sufficient condition, requires or implies “maximally unbounded and unburdened choice,” in correspondence with the classical Liberal belief that “people tend to fare best when they possess, more or less, the greatest possible freedom to live as they wish,” a belief Daniel N. Haybron views as central to “liberal optimism” or what I would term, perhaps more precisely, “libertarian (liberal) optimism.” This optimism, which rarely distinguishes between welfare, well-being, and happiness, has an “elective affinity” with prominent features of capitalism, wherein consumption (‘understood in a broad sense that includes aesthetic pleasures and entertainment as well as consumption of goods in the ordinary sense’) defines the best life for the individual, and thus more opportunities for and occasions of consumption are thought to bring about pleasure and happiness. Liberal capitalism advocates the free choice of life-style, but “… forgets that the choice is to a large extent preempted by the social environment in which people grow up and live. These endogenously emerging preferences can well lead to choices whose ultimate outcome is avoidable ruin or misery.”
Please see Haybron’s The Pursuit of Unhappiness: The Elusive Psychology of Well-Being (Oxford University Press, 2008). For a taste of a rather different understanding and interpretation of what freedom might or should entail, in other words, a conception that does not presuppose or assume the unconditional value of “maximally unbounded and unburdened choice,” see Jon Elster’s Ulysses Unbound: Studies in Rationality, Precommitment, and Constraints (Cambridge University Press, 2000) and the essays collected in Jonardon Ganeri and Clare Carlisle, eds., Philosophy as Therapeia (Royal Institute of Philosophy Supplement: 66) (Cambridge University Press, 2010). On how this more modest (realistic?) conception of what freedom ideally entails is compatible with Marxist conceptions of self-realization and human fulfillment, see Elster’s article, “Self-realisation in work and politics: the Marxist conception of the good life,” in Jon Elster and Karl Ove Moene, eds., Alternatives to Capitalism (Cambridge University Press, 1989).
Whether one believes in religion or not, we are all seeking something better in life—the very motion of our life is toward happiness. — The Dalai Lama
Happiness, it is said, is seldom found by those who seek it, and never by those who seek it for themselves. — F. Emerson Andrews
Happiness is not a state to arrive at, but a manner of traveling. — Margaret Lee Runbeck
Most of us, I suspect, possess or cleave to a desire or wish to be happy. “Most of us” is the requisite qualification because, in the words of the late Nel Noddings, “there are some gloomy souls who deny that happiness is our chief concern and claim something else as a greater good ….”1 We may even think our status as human beings or persons brings along with it (despite the awkwardness of the locution) a right to be happy, that the pursuit, as it were, of happiness is part and parcel of human nature or the human condition (and thus we may come to resent or at least get angry at those people or things we believe interfere with or are obstacles to our justified pursuit or just deserts). As for what happiness in fact is or consists of, we are not certain; at least we perhaps have only a dim conception, or a vaguely intuitive sense of what makes us happy. Happiness may embody or represent that which enhances our welfare and well-being, what makes life meaningful or brings self-fulfillment (or eudaimonia). We may want to define happiness so it is not circumscribed solely by “health, wealth, and the ups and downs of everyday life,” but the satisfaction of minimal criteria for material welfare and well-being may be a necessary yet not sufficient condition for happiness.2 Or we may be content to see happiness as simply the converse of suffering, the absence of suffering (of various kinds). We may believe happiness is an occasional, momentary, episodic, indeed occurrent emotion or affective state (arising from more or less familiar gratifications, whether well-known simple pleasures or complex pleasures).3 Or we may view it more along the lines of a disposition (i.e., with a potential to be actualized), one that takes either the form of a mood (e.g., ‘he’s been rather happy of late’) or a general affective or even character trait, as in, “He’s such a happy person.” It might be the case that what we entertain as the “stuff” of happiness—no matter how assiduously, passionately, or obsessively entertained—is an illusion, a fantasy, an impossible desire or wish. What is perhaps more disturbing or frightening is to consider the possibility that the conscious wish or desire for happiness, to be happy, is routinely or ritually undermined by our unconscious, the appreciation or awareness of which may come to us all too late in life. More transparently, and in the words of the Dalai Lama, it may merely be the case that “we often employ misguided means in our attempts to be happy and wind up creating more causes for misery instead.”
In any case, it is likely that the desire or wish or intention to “be happy” is not one that can be directly sought; in other words, it belongs to that class of mental states or states of affairs that cannot be (directly) willed. Indeed, the deliberate pursuit of such states represents the irrationality, performative contradiction, or sheer folly of “willing what cannot be willed” (the late psychiatrist and psychoanalyst Leslie Farber). If that is true, and I lack sufficient reason to doubt it, then happiness is a welcome or unexpected by-product or side effect of some other action or activity, some other act or condition whose principal intention or motivation was not the desire or wish to be happy, even if happiness is somehow linked to or circuitously or indirectly caused by the original intention or primary motivation. Or it might be that we can only be happy when the desire or wish for happiness recedes, as it were, to the back of our minds, when we have forgotten the desire or wish to be happy, as we are no longer “self-conscious” about it, let alone obsessed with being happy (a Buddhist would put this in terms of not being ‘attached’ to happiness). Let’s assume for a moment that this is wrong, and thus that I’m able to deliberately pursue or perform or consume those (material or immaterial) things that bring me happiness, if only momentarily or episodically (this need not be simply the desire for instant gratification). In such cases, the happiness may itself be described as evanescent, degrading in intensity or quality over time, its character exemplifying what economists term “diminishing marginal utility,” its pleasures increasingly elusive to the point of vanishing altogether.
Whatever happiness is, it seems reasonable to hope that, at least in part, our happiness should be occasionally characterized as a consequence of seeing others happy, or of our ability to, as we say, make others happy. Put a bit differently, a portion of our happiness should be constituted by the joy or pleasure we find in witnessing the happiness of others, whether or not we have been complicit in or had anything to do with its outcome (which is verbally if not always honestly expressed when one says, ‘I’m so happy for you!’). See our last note below for this idea within Buddhism.
Notes
Further Reading: Chapter 15, “Caring about Oneself—Happiness and Sadness,” in Aaron Ben-Ze’ev, The Subtlety of Emotions (MIT Press, 2000): 449-472.
“As one writer insisted, ‘Don’t mistake pleasures for happiness. They’re a different breed of dog.’ Certainly, upon first reading, ‘pleasure’ may sound disreputable, whereas ‘happiness’ sounds morally acceptable, thanks in part to the famous, if ill-understood phrase in the Declaration of Independence guaranteeing the citizen’s right to ‘life, liberty, and the pursuit of happiness.’ Perhaps because of the Declaration’s lofty aspirations, few have troubled to think through the connotations attached to the words ‘happy’ and ‘happiness.’ True, there is an idea behind Bhutan’s Gross National Happiness index, that money can’t buy enough contentment, just as ideas spur UC-Berkeley’s ‘Science of Happiness’ project and the academic revival of interest in phenomenology. But evidently, the modern preoccupation with happiness is a symptom of the industrializing and industrialized world. To today’s smart thinkers who equate ‘happiness’ with ‘being satisfied with one’s life as a whole’ (a state of being, rather than a response to a given stimulus), the early thinkers who are my subject would have replied that seeking such an ambitious and unattainable goal would likely end in dissatisfaction and greater unhappiness. Indeed, sheer overreach may have brought about the current Euro-American paradox; ergo talk of happiness is most prevalent in the very populations heavily reliant upon anti-depressants and opiates. It is moreover hard to ignore the disturbing racist and culturalist overtones of recent ‘Happiness’ projects, lodged in the universalist presumption that all cultures everywhere have replicated the same set of emotions and emotional triggers as US citizens today. …[T]he vocabulary for several American virtues relating to happiness (the virtue of ‘cheerfulness,’ for example [I would add ‘positive thinking’ and being an ‘optimist’]) do not seem to exist in the classical writings in China, though an absence of literary evidence does not insure that cheerfulness was absent from daily life.
The Ancients with clear-eyed, even brutal frankness noted the manifold ills to which all people are prey: sickness, decrepitude, death, natural catastrophe, and slander among them. Fully cognizant of the level of destruction that ill luck, bad timing, or vengeful powers can wreak upon the innocent, the thinkers discussed in these pages advised followers to devise and adhere to programs and practices that promised a fair chance of shielding people, as much as humanly possible, from the worst slings and arrows of outrageous fortune. Each person, they said, can learn to give and take pleasure, despite the calamities that beset ordinary lives. In addition, those already favored by fortune may learn how to magnify their blessings. The closest counterpart to the modern tropes linking autonomy to pleasure is the continual profession by Chinese elites of their determination to avoid enslavement to other people and things. This admirable clarity about life’s constraints—so greatly at odds with American positive thinking—precludes mindless optimism. In consequence, no writings in classical Chinese, so far as I know, denote or connote ‘happiness’ either in its older Western sense of ‘favored by fortune’ or in its modern connotation of ‘a free state of blissful autonomy.’ The thinkers reviewed in this book would mock the fond hope that one can ‘stumble upon happiness.’” — Michael Nylan, The Chinese Pleasure Book (Zone Books, 2018)
Recommended
This post is freely available for viewing or download (as a pdf doc.) at my Academia site.
Posted at 12:07 PM in Patrick S. O'Donnell
If someone greets me with a nice smile, and expresses a genuinely friendly attitude, I appreciate it very much. Though I might not know that person, or even understand their language, my heart is instantly gladdened. On the other hand, if kindness is lacking, even in someone from my own culture whom I have known for many years, I feel it. Kindness and love, a real sense of sisterhood and brotherhood, these are very precious. They make community possible, and therefore are an essential part of any society. — His Holiness the 14th Dalai Lama, Tenzin Gyatso
It would be tedious and perhaps depressing to narrate the myriad instances of rudeness, inconsiderateness, and generally ill-mannered behavior that one comes across in daily life, at least in this country, and at least where we live. Of course, when people observe basic social norms, including norms of politeness and consideration, such behavior is unremarkable because expected and more or less routine, on the order of a secular ritual. It is only transgressions of norms of “civility” and good manners that remind us of the need for and value of such things, their disappearance or decline prompting us to reflect on social and cultural order more generally. My wife and I find ourselves, nonetheless, sharing stories of these violations and the apparent corrosion of basic behavioral norms every time we venture outside the house (perhaps that makes us feel better about ourselves, varnishing our self-images if you will). I often wonder if our observations and complaints are more than mere anecdotal evidence, perhaps attributable to our being in our mid-sixties, verging—charitably speaking—on being the crotchety old folks who are the staples if not stereotypes of the sit-coms and movies of our generation. I do not doubt there are causal pathways and feedback loops that operate in the larger society involving mass and social media, including the disturbing behavior of an increasing proportion of politicians and public officials at all levels of politics. But that is a topic for another day, and perhaps too obvious in cause and consequence to warrant further attention. And I’d rather not share the little things I try to do by way of putting my finger in the proverbial dike (I may at the same time fantasize about teaching the other person a lesson, or delivering a swift punch to the face, but the better side of me typically triumphs, at least as long as I’m sober), as these actions are not very imaginative or unusual but simply reflect the manner of my upbringing, including attendance at a Catholic school. All of that was of course reinforced in various social settings and circumstances (by our neighbors, my parents’ friends, and so forth).
By way of aiding our reflections on such things, I unreservedly recommend Amy Olberding’s book, The Wrong of Rudeness: Learning Modern Civility from Ancient Chinese Philosophy (Oxford University Press, 2019). Incidentally, it was from the late Professor Herbert Fingarette (for whom I was a teaching assistant in the 1980s in his introductory undergraduate course on Asian philosophies) that I first learned to appreciate the “wrong of rudeness,” in particular from a title Professor Olberding lists in her “works cited,” Confucius: The Secular as Sacred (Harper Torchbooks, 1972). My “study guide” for Confucianism introduces and discusses some basic concepts that may spark your interest as well (it is found under ‘teaching documents’ on my Academia page).
Posted at 03:09 PM in Patrick S. O'Donnell
Jacob Lawrence, The Library (1978)
Prelude
* * *
“In 1789 and for long afterward, in France and elsewhere, a single word often sufficed to explain the origins of the French Revolution: books. Just days after the fall of the Bastille, the radical journalist and politician Bertrand Barère wrote, ‘Books did it all. Books created opinion, books brought enlightenment down into all classes of society, books destroyed fanaticism and overthrew the prejudices that had subjugated us.’ Even counterrevolutionaries who saw the Revolution as a catastrophe agreed with Barère as to its cause: it was books or, sometimes, ‘philosophy’—by which they meant the great movement of ideas we now call the Enlightenment.
No serious historian today would attribute this kind of power to books alone. We recognize that the French Revolution occurred for many reasons. But no serious historian today would discount the importance of books either. This revolution, one of the greatest in history, did not just replace one king with another. It attempted to establish an entirely new regime, grounded in principles that authors had widely discussed before 1789: equality before the law, the sovereignty of the nation, religious toleration, the ‘rights of man.’ Books mattered.”—David A. Bell, “From Readers to Revolutionaries,” The New York Review of Books, June 27, 2019
What follows below is from an article “on the working class and the meaning of education,” Kenan Malik’s column in the Observer last month (31 July): “If education is all about getting a job, the humanities are left just to the rich.” It so happens that there is an equally cogent and compelling review essay at Jacobin by Kieron Monks of a new biography of C.L.R. James that in several respects complements Malik’s piece: “C. L. R. James loved seeing workers take ownership of culture that was thought to belong to their betters. He believed in the creative talents of workers, just as he believed in their power to secure their own liberation without direction from above.” In Monks’ words, James
“took satisfaction from seeing the proletariat take ownership of cultural forms thought to belong to their betters, from West Indian steel bands playing Rimsky-Korsakov to young cricketers from the barrack yards beating the English at their own game. James believed in the creative talents of the working classes just as he believed in their power to secure their own liberation without direction from above, a view that informed his rejection of Stalinism and the Communist Party. He idealized the direct democracy of ancient Greece as the most perfect system. [….]
And now Malik:
‘We rented a garret, for which we paid (I think) 25s a year, bought a few second-hand forms and desks, borrowed a few chairs from the people in the house, bought a shilling’s worth of coals… and started our college.’
So remembered Joseph Greenwood, a cloth cutter in a West Yorkshire mill, about how, in 1860, he helped set up Culloden College, one of hundreds of working-class mutual improvement societies in 19th-century Britain. “We had no men of position or education connected with us,” he added, “but several of the students who had made special study of some particular subject were appointed teachers, so that the teacher of one class might be a pupil in another.”
Greenwood’s story is one of many told by Jonathan Rose in his classic The Intellectual Life of the British Working Classes, a magnificent history of the struggles of working people to educate themselves, from early autodidactism to the Workers’ Educational Association. For those within this tradition the significance of education was not simply in providing the means to a better job but in allowing for new ways of thinking.
‘Books to me became symbols of social revolution,’ observed James Clunie, a house painter who became the Labour MP for Dunfermline in the 1950s. ‘The miner was no longer the “hewer of wood and the drawer of water” but became … a leader in his own right, advocate, writer, the equal of men.’ By the time that Rose published his book in 2001, that tradition had largely ebbed away. And, in the two decades since, so has the sense of education as a means of expanding one’s mind.
Last week, Roehampton University, in south-west London, confirmed that it is going to fire and rehire half its academic workforce and sack at least 65. Nineteen courses, including classics and anthropology, are likely to be closed. It wants to concentrate more on ‘career-focused’ learning. It is the latest in a series of cuts to the humanities made by British universities, from languages at Aston to English literature at Sheffield Hallam. These cuts mark a transformation in the role of universities that is rooted in three trends: the introduction of the market into higher education; a view of students as consumers; and an instrumental attitude to knowledge.
The 1963 Robbins report into British higher education argued for expansion of universities on the grounds that learning was a good in itself. ‘The search for truth is an essential function of the institutions of higher education,’ it observed, ‘and the process of education is itself most vital when it partakes in the nature of discovery.’ The 2010 Browne report on the funding of higher education took a very different approach, viewing the significance of universities as primarily economic. ‘Higher education matters,’ it insisted, because it allows students to find employment with ‘higher wages and better job satisfaction’ and ‘helps produce economic growth.’
The utilitarian view of education is often presented as a means of advancing working-class students by training them for the job market. What it actually does is tell working-class students to study whatever best fits them for their station in life. So, philosophy, history and literature increasingly become the playthings of the rich and privileged.
There is another way, too, in which the relationship between the working class and education has changed. A report last week from the think-tank the IPPR revealed the paucity of diversity among MPs, a subject of much debate recently. The IPPR says there is a 5% ‘representation gap’ on ethnicity – 10% of MPs have a minority background compared to 15% of the general populace. For women, the gap between prevalence in the population and in parliament is 17% and for the working class it is 27%. The biggest gap, however, comes with education – 86% of MPs have been to university compared to 34% of the population at large. The cleavage between voters and those who govern them is expressed through the class divide but even more so through the education gap. The proportion of women and minority MPs has increased over the past 30 years while that of working-class MPs has fallen dramatically. In the 1987-92 parliament, 28% of Labour MPs had a manufacturing, manual or unskilled job before entering parliament. By 2010, that was 10%, rising to 13% for the 2019 intake. For Tories, unsurprisingly, the figure was consistently below 5% and fell to just 1% in 2019.
Part of the reason for the decline in working-class MPs is that the institutions that gave workers a public platform, in particular trade unions, have waned. The RMT’s Mick Lynch, and his success in defending workers’ rights, has caught the public imagination. Fifty years ago, there were many Mick Lynches because the working class was more central to political life.
At the same time, education has become a marker of social difference in a novel way. As Western societies have become more technocratic, so there has developed, in the words of the political scientist David Runciman, ‘a new class of experts, for whom education is a prerequisite of entry into the elite’ – bankers, lawyers, doctors, civil servants, pundits, academics. The real educational divide is not ‘between knowledge and ignorance’ but ‘a clash between one world view and another.’ So, education has become a marker of the Brexit divide.
All this has led some to claim education, not class, is Britain’s real political divide. It isn’t. Education is, rather, both one of the most significant expressions of the class divide and a means of obscuring it. ‘If there is one man in the world who needs knowledge,’ wrote the Durham collier Jack Lawson in 1932, ‘it is he who does the world’s most needful work and gets the least return.’ That is as true today as it was 90 years ago.”
Relevant Bibliographies
Posted at 05:17 AM in Patrick S. O'Donnell
I routinely check out the blog posts at Dorf on Law, and this morning I came across some completely unexpected (although her illness was known to others) and heartbreaking news: Professor Dorf’s “co-blogger, co-author, colleague, best friend, and wife for over 31 years—died this morning.” Here is the notice at Cornell Law School, where Sherry F. Colb was the C.S. Wong Professor of Law. In the words of Bridget Crawford at The Faculty Lounge, “Professor Colb will be missed by so many. May her memory be a blessing.” Speaking for myself and on behalf of my co-bloggers, we offer our deep condolences to Michael C. Dorf and his family.
Update: For links to one of Professor Colb’s books as well as her work available online, please see here and here at Larry Solum’s Legal Theory blog.
Posted at 01:10 PM in Patrick S. O'Donnell
Arguments are effective as weapons only if they are logically cogent, and if they are so they reveal connexions, the disclosure of which is not the less necessary to the discovery of truth for being also handy in the discomfiture of opponents. — Gilbert Ryle (qtd. in Garver below)
The title of my latest bibliography is (yes, it’s a mouthful) “Toward Assessing the Apparent Imperatives and Possible Constraints of Digital Media and Artificial Intelligence (AI): communication, speech, and rhetoric in a fragile technocratic capitalist and constitutional democracy.” Whenever I compose these lists, I try to read a substantial amount of what appears by my lights to be the crème-de-la-crème of the literature in English. Hence this quote from Eugene Garver’s indispensable work, Aristotle’s Rhetoric: An Art of Character (University of Chicago Press, 1994):
“Reasoning does not automatically persuade, yet, if artful rhetoric [which speaks to or implies character] is argumentative, nothing but reason persuades.”
As Garver elsewhere explains, it is “obvious why character or ēthos persuades: the end of rhetoric is belief or trust, and belief and trust attach primarily to people whom we trust, and only derivatively to propositions which we believe.”
Garver now states something which I suspect rubs more than a few professional philosophers the wrong way insofar as they hope or believe that sound, valid, or even quite plausible (i.e., coherent or well-formed) arguments should have some corresponding measure of persuasive capacity (I am not saying they fail to distinguish valid or sound arguments from persuasive ones, only that there is a tendency to think the former have some necessary connection to the latter):
“There is no reason to expect a correlation between degrees of persuasiveness and degrees of validity or coherence, or any other strictly logical value, as there is for ēthos and pathos [emotion or passion, for Aristotle, with regard to emotions in the Rhetoric, did not refer to any or all emotions but certain emotions in particular].”
There is of course more to this argument, but suffice it to say here that perhaps philosophers who profess belief in the values and purposes intrinsic to democracy (which has of late exposed its weaknesses and fragility in this country), especially those keenly interested in moral psychology and ethics, “public reason” (John Dewey, John Rawls, Gerald Gaus, Amartya Sen, et al.), and the various forms of communication in a—participatory, deliberative, and representative—democracy, will come to appreciate and thus help cultivate the possibilities for and the various fora of morally and politically responsible public rhetoric.
Posted at 03:00 PM in Patrick S. O'Donnell
I’ve put together a short reading list that speaks to questions that arise from “democracy and digital technology” (democracy in theory and practice, and in participatory, deliberative, and representative modes, all of which are essential to ‘open democracy’ in Landemore’s sense). It is intended to be part of a forthcoming larger list that will include material on “artificial intelligence” (AI) while widening the scope to address issues that arise within a democratic society that in significant and often insidious ways is malformed by capitalist political economy (as evidenced in our earlier post on Zephyr Teachout’s NYRB article). It will also contain works more philosophically and ethically oriented (reflecting my critique—some of which is here—of the many extravagant claims often made on behalf of AI):
Please note: relevant bibliographies are appended to our prior post.
Posted at 06:17 AM in Patrick S. O'Donnell
The following is from selected parts of Zephyr Teachout’s “The Boss Will See You Now” (New York Review of Books, Aug. 18, 2022), a review essay of four recent titles about digital surveillance, tracking, and performance monitoring of millions of workers in an affluent capitalist and deeply inegalitarian society (conditions that amount to what Elizabeth Anderson terms ‘private government’).
With wholly atomized workers, discouraged from connecting with one another but forced to offer a full, private portrait of themselves to their bosses, I cannot imagine a democracy.
* * *
[….] “As it happened, the 1980s and 1990s were a major turning point in surveillance, the period when companies went on their first buying sprees for electronic performance-monitoring. In 1987 approximately six million workers were watched in some kind of mediated way, generally a video camera or audio recorder; by 1994, roughly one in seven American workers, about 20 million, was being electronically tracked at work. The numbers steadily increased from there. When videotape technology was supplanted by digital devices that could scan multiple locations at once, the cameras first installed to protect businesses from theft shifted their insatiable gaze from the merchandise to the workers.
The second big turning point in electronic performance-monitoring is happening right now. It’s driven by wearable tech, artificial intelligence, and Covid. Corporations’ use of surveillance software increased by 50 percent in 2020, the first year of the pandemic, according to some estimates, and has continued to grow.
This new tracking technology is ubiquitous and intrusive. Companies track for security, for efficiency, and because they can. They inspect and preserve and analyze movements, conversations, social connections, and affect. If the first surveillance expansion was a territorial grab, asserting authority over the whole person at work, the second is like fracking the land. It is changing the structural composition of how humans relate to one another and to themselves.
Some long-haul truckers have to drive a fifty-foot flatbed truck six hundred miles a day with a video camera staring them down the entire time, watching their eyes, their knuckles, their twitches, their whistles, their neck movements. Imagine living in front of that nosy boss-face camera for months on end as it scans your cab, which serves as your home most of the time. On one of many angry Reddit forums about driver-facing cameras, a trucker wrote that he’d put up with one only ‘if the company owner gives me a 24/7 unrestricted stream in to his house.’ ‘Those few hundred miles a day is the only time I completely have to myself and I feel as if it is being tainted,’ added another. ‘I just want to pick my nose and scratch my balls in peace man.’ A bus driver described the human desire to ‘pull a weird face or talk to yourself or sing along with a song…. I could feel how much less cortisol was flowing through my body in my second job where the buses were older and did not have cameras inside. It makes you unhealthy and run down.’
Employers read employees’ e-mails, track their Internet use, and listen to their conversations. Nurses and warehouse workers are forced to wear ID badges, wristbands, or clothing with chips that track their movements, measuring steps and comparing them to coworkers’ and the steps taken yesterday. The wristbands that now commonly encircle your skin, caressing your median nerve, might in the future be used to send signals back to you or your employer, measuring how many minutes you spend in the bathroom. Amazon, which minutely tracks every moment of a warehouse worker’s activity, every pause and conversation, has a patent for a wristband that would, the Times reported, ‘emit ultrasonic sound pulses and radio transmissions to track where an employee’s hands were in relation to inventory bins’ and then vibrate to steer the worker toward the correct bin. A ‘SmartCap’ used in trucking monitors brainwaves for weariness.
Off-the-shelf human resources software can monitor workers’ tone of voice. One major firm, Cogito, touts its product as ‘the AI-informed coach [that] augments humans through live, in-call voice analysis and feedback.’ While workers are making fifteen dollars an hour fielding angry consumer complaints in a cubicle, they must pay heed to a pop-up screen that starts flashing if they talk too fast, if there is overlap between their voice and the customer’s voice, or if a pause is too long. ‘Empathy at scale,’ the company boasts.
In one sense, intimately tracking behavior is old news: the business model of tech companies like Facebook and Google, after all, relies on tracking users on- and off-site. The commodification of data is in its third decade. But surveillance and automatic management at work are different. Workers can’t opt out without losing their jobs: you can’t turn off the camera in the truck if doing so goes against company policy; you can’t rip the recording device off your ID card. And worker surveillance comes with a powerful implicit threat: if the company notices too much fatigue, you might get overlooked for a promotion. If it overhears something it doesn’t like, you could get fired.
The political implications of ubiquitous employment surveillance are monumental. While bosses always listened in on worker conversations, they could only listen rarely—anything more was logistically impossible. Not now. Employees have to assume that everything they say can be recorded. What does it mean when all the words, and the tone of those words, might be replayed? Whispering has lost its power.
In many cases, worker surveillance is installed for ostensible safety reasons, like the thermal cameras installed to protect customers and coworkers from a worker who has a fever. But it is not, it turns out, good for our well-being. Electronic surveillance puts the body of the tracked person in a state of perpetual hypervigilance, which is particularly bad for health—and worse when accompanied by powerlessness. Employees who know they are being monitored can become anxious, worn down, extremely tense, and angry. Monitoring causes a release of stress chemicals and keeps them flowing, which can aggravate heart problems. It can lead to mood disturbances, hyperventilation, and depression. Business professors from Cornell and McMaster Universities recently conducted a survey of electronic monitoring in call centers and showed that the stress it caused was as great as the stress caused by abusive customers. Workers felt that monitoring was used for discipline, not improvement, and that the expectations were unreasonable and the use of monitoring unfair. They preferred a human boss to an ever-present robot spy with the power to affect their paychecks.
Is it any surprise that truckers’ mental health is suffering? Or that call center employees are breaking down? Truckers and call center workers report a kind of destabilizing fog, a constant layer of uncertainty and paranoia: which hand gesture, which bathroom break, which conversation was it that caused me to lose that bonus? ‘I know we’re on a job, but, I mean, I’m afraid to scratch my nose,’ an Amazon driver told Insider for a story about the company’s driver-facing cameras. She didn’t share her name for fear of reprisal. [….]
All of this is demoralizing and dystopian, but what does it have to do with democracy? Elizabeth Anderson’s lively and persuasive 2017 book, Private Government, offers a partial answer. Anderson, a political philosopher at the University of Michigan, shakes the reader by the shoulders to get us out of the strange rigidity that pervades public discussion of government. Employment is a form of government, she argues, one that is far more relevant and immediate for most people than the Washington, D.C., kind.
A powerful company like Amazon, for instance, sets its own terms of employment—and in so doing impacts those of UPS drivers and the broader logistics industry. Private employers with industry-wide influence have coercive power—what Anderson calls governing power. Private government, personified by private guilds or by state-sanctioned economic monopolies in soap, salt, and leather, was the central target of intellectuals and activists like John Locke and the Levellers. Anderson sees in Locke, Adam Smith, and others a belief that the arbitrary power to debase and discipline is a threat to a free society, wherever it appears, and that public, accountable government should protect against private tyranny.
Many modern ‘thinkers and politicians,’ she argues, are ‘like those patients who cannot perceive one-half of their bodies:’ they ‘cannot perceive half of the economy: they cannot perceive the half that takes place beyond the market, after the employment contract is accepted.’ As a result, companies are generally treated as wholly private. Many private-sector workers, Anderson writes, live under ‘dictatorships in their work lives. Usually, those dictatorships have the legal authority to regulate workers’ off-hour lives as well—their political activities, speech, choice of sexual partner, use of recreational drugs, alcohol, smoking, and exercise.’
For her, service workers who clock out, or technicians and realtors and cooks who seem endowed with substantial freedoms, are burdened by a legal system that allows corporations to fire a worker based on off-the-clock activity. The speech rights of workers are practically nonexistent except as they explicitly relate to labor organizing, which, Anderson argues, is effectively a dead letter these days because of the difficulty of enforcement and the fear of challenging the boss’s tactics.
How did things get so bad? Anderson believes the root issues that enabled the current dystopian workplace go back generations. When the Industrial Revolution shifted the ‘primary site of paid work from the household to the factory,’ it imported the long tradition of wholly arbitrary power within the household, in which children did not have freedom vis-à-vis their parents, and wives had limited freedom vis-à-vis their spouses. The Industrial Revolution could have provided an escape from the private tyrannies of home life, but instead it replicated them. During the heyday of the Ford Motor Company, its Sociological Department began inspecting workers’ homes. Anderson writes: ‘Workers were eligible for Ford’s famous $5 daily wage only if they kept their homes clean, ate diets deemed healthy, abstained from drinking, used the bathtub appropriately, did not take in boarders, avoided spending too much on foreign relatives, and were assimilated to American cultural norms.’
Anderson points out that while Apple does not visit people’s homes today, it does require retail workers to open their bags for inspections before coming into work. We take this for granted, she notes, but should we? Nearly half of Americans have undergone a suspicion-less drug test. And many workers have no protection from getting fired for what they say on social media. To those who claim the workplace isn’t government because you can quit, Anderson retorts, ‘This is like saying Mussolini wasn’t a dictator, because Italians could emigrate.’
Anderson isn’t focused on surveillance, but her work suggests two things. First, that to address the constant spying, we should focus on power, not just the technology. Labor rights and antitrust enforcement must be first-level responses to the current—and worsening—structures of power. Second, we should treat employer surveillance as we do any governmental surveillance—in other words, with deep suspicion. It is a truism that governmental surveillance chills speech and debate and erodes the public sphere; once we can perceive the workplace as a site of government, we can perhaps build a political movement for greater freedom in the places where working Americans spend most of their waking hours.
To make sense of the reality we are in, we need to be able to talk to one another without fear of our conversations being used against us. The private conversations among workers—and friendships, debates, questions—are part of the cohesion and connection that enables not just labor organizing but public life. When everything we say is being listened to—especially by a smaller and more powerful cadre of employers—it can become easier not to speak. This is not unlike the political totalitarianism that Hannah Arendt warned against, where the state aims to disintegrate both the private and the public by submerging the private into the public and then controlling the public. The logical conclusion of workplace surveillance is that the private sphere ceases to exist at home because it ceases to exist at work, where visibility into the worker’s life is unrestrained. [….]
It is no coincidence that routine work surveillance followed closely on the heels of the Reagan antitrust revolution and the collapse of private sector unionization. Nothing except unionization or new laws would stop an employer from taking all the data it is gathering from sensors and recordings and using them to more precisely adjust wages, until each worker gets the lowest wage at which they are willing to work, and all workers live in fear of retaliation. This is no more sci-fi than Facebook and Google serving users individualized content and ads designed to keep us on their services.
Tracking technology may be marketed as a tool to protect people, but it will end up being used to identify with precision how little each worker is willing to make. It will be used to depress wages and also kill the camaraderie that precedes unionization by making it harder to connect with other workers, poisoning the community that enables democratic debate. It will be used to disrupt solidarity by paying workers differently. And it will lead to anxiety and fear permeating more workplaces, as the fog of not knowing why you got a bonus or demotion shapes the day.
This matters because work is not an afterthought for democratic society; the relationships built at work are an essential building block. With wholly atomized workers, discouraged from connecting with one another but forced to offer a full, private portrait of themselves to their bosses, I cannot imagine a democracy.”
Relevant Bibliographies
Posted at 04:48 AM in Patrick S. O'Donnell | Permalink | Comments (0)
Reflections on the attempted murder of Salman Rushdie as it relates to widespread perceptions and characterizations of, and generalizations about, “Islam.” See too this statement from Suzanne Nossel of PEN America. Update: Nossel’s opinion piece for The Guardian.
Perhaps not surprisingly, more than a few folks on the internet are using the occasion of the assault on and attempted murder of Salman Rushdie in New York to spread or reinforce rash or hasty generalizations (committing the informal fallacy of ‘converse accident’) that serve to confirm their ignorance of Islamic traditions and embolden their biases and prejudices about “Islam,” while also indulging in a kind of virtue signaling that brings virtual applause in its wake. These generalizations concern “Islam” as a religious worldview and philosophy as well as those who are self-described or identified as Muslims. There is nothing peculiar about Islam in this case vis-à-vis other religious or secular worldviews and ideologies, provided we understand that individuals, groups, movements, sects, authorities, institutions, and so forth will often invoke their worldviews or ideologies to motivate, justify, or excuse actions that we generally hold to be immoral and criminal. In other words, these ideologies and worldviews can be used for good or ill, rightly or wrongly, for peace or violence, and so forth: there is a spectrum running from reasonable and rational rationalizations at one end to irrational, immoral, or perverse rationalizations (‘rationalization’ in the pejorative sense) at the other.
To be sure, some ideologies are morally problematic ab initio, so to speak, such as fascism or Stalinism or Maoism. At other times and places, religious and secular worldviews (examples of the former: Poland, India, the U.S., Sri Lanka, Myanmar, etc.; of the latter: Party-State Socialism/Communism, Nazism, etc.) become paired with forms of nationalism, and in such cases we find religious beliefs and identity harnessed to non-religious, non-spiritual, and immoral or unethical goals or ends (and thus we have a manifest contradiction between means and ends, wherein there is a failure to recognize that the choice of means can distort or destroy otherwise justifiable ends). And of course in the history of religious and secular worldviews and ideologies, we are likely to find occasional currents, periods, places, etc. in which what occurs in the real world, on the ground as we say, bears little or no relation to a normative model or picture of the principal beliefs, values, and virtues found in these worldviews. It is simply unavailing and uninformative to refer to these ideologies and worldviews in toto without the necessary distinctions and specifications that history and the social sciences can provide, alongside the relevant immediate conditions and circumstances that allow us to make sense of an action or event. In short, there is nothing significant or important or meaningful we can infer about “Islam” as such from the stabbing of Salman Rushdie. And nothing I have said diminishes our horror and disgust over this egregiously immoral and criminal act of personal violence.
Alas, fatwas have become a fairly indiscriminate legal category, with opposing religious authorities and jurists formulating fatwas that suit their interests, while the majority of Muslims are left to their own devices, so to speak, when it comes to acknowledging or assessing their validity or significance. The Wikipedia entry has some excellent discussion of this in the latter half of the article. The young men (and occasionally women) who resort to these acts of violence around the world suggest, it seems to me, that this is not, strictly speaking or peculiarly, about religion, but about their sense of individual and collective identity, moral psychology, recognition, and self-worth, in which case one can find justification, warrant, rationalizations, what have you, in any number of ideologies and worldviews, secular or religious, political or nationalist, for such acts of violence (hence the many mass shootings in this country). On occasion, of course, ideologies and worldviews appear to be altogether missing from the motivational structure, in which case one looks closer to home and upbringing for causal variables (some grievance, slight, disrespect, abuse, and so forth). This is not to say some fatwa or prospect of reward played no role whatsoever in the attempted murder of Rushdie, but we need to account for why some individuals choose to go this route while many others would never even consider doing such things: what sorts of individuals are readily moved to the tipping point?
* * *
As for fatwas [fatāwā] (an incredibly complex topic in Islamic jurisprudence), this snippet from the Wikipedia entry on same is informative:
[….] “In Iran, Ayatollah Khomeini used proclamations and fatwas to introduce and legitimize a number of institutions, including the Council of the Islamic Revolution and the Iranian Parliament. Khomeini’s most publicized fatwa was the proclamation condemning Salman Rushdie to death for his novel The Satanic Verses. Khomeini himself did not call this proclamation a fatwa, and some scholars have argued that it did not qualify as one, since in Islamic legal theory only a court can decide whether an accused is guilty. However, after the proclamation was presented as a fatwa in Western press, this characterization was widely accepted by both its critics and its supporters, and the Rushdie Affair is credited with bringing the institution of fatwa to world attention. Together with later militant fatwas, it has contributed to the popular misconception of the fatwa as a religious death warrant.
Many militant and reform movements in modern times have disseminated fatwas issued by individuals who do not possess the qualifications traditionally required of a mufti. A famous example is the fatwa issued in 1998 by Osama bin Laden and four of his associates, proclaiming ‘jihad against Jews and Crusaders’ and calling for the killing of American civilians. In addition to denouncing its content, many Islamic jurists stressed that bin Laden was not qualified to either issue a fatwa or declare a jihad. The Amman Message was a statement, signed in 2005 in Jordan by nearly 200 prominent Islamic jurists, which served as a ‘counter-fatwa’ against a widespread use of takfir (excommunication) by jihadist groups to justify jihad against rulers of Muslim-majority countries. The Amman Message recognized eight legitimate schools of Islamic law and prohibited declarations of apostasy against them. The statement also asserted that fatwas can be issued only by properly trained muftis, thereby seeking to delegitimize fatwas issued by militants who lack the requisite qualifications. Erroneous and sometimes bizarre fatwas issued by unqualified or eccentric individuals in recent times have sometimes given rise to complaints about a ‘chaos’ in the modern practice of ifta [to give an (often legal or religious) explanation].
In the aftermath of the September 11, 2001, attacks, a group of Middle Eastern Islamic scholars issued a fatwa permitting Muslims serving in the U.S. army to participate in military action against Muslim countries, in response to a query from a U.S. Army Muslim chaplain. This fatwa illustrated two increasingly widespread practices. First, it drew directly on the Quran and hadith without referencing the body of jurisprudence from any of the traditional schools of Islamic law. Secondly, questions from Western Muslims directed to muftis in Muslim-majority countries have become increasingly common, as about one-third of Muslims now live in Muslim-minority countries.
Institutions devoted specifically to issuing fatwas to Western Muslims have been established in the West, including the Fiqh Council of North America (FCNA, founded in 1986) and the European Council for Fatwa and Research (ECFR, founded in 1997). These organizations aim to provide fatwas that address the concerns of Muslim minorities, helping them to comply with sharia, while stressing the compatibility of Islam with diverse modern contexts. The FCNA was founded with the goal of developing legal methodologies for adapting Islamic law to life in the West. The ECFR draws on all major schools of Sunni law as well as other traditional legal principles, such as concern for the public good, local custom, and the prevention of harm, to derive fatwas suitable for life in Europe. For example, a 2001 ECFR ruling allowed a woman who had converted to Islam to remain married without requiring her husband's conversion, based in part on the existence of European laws and customs under which women are guaranteed the freedom of religion. Rulings of this kind have been welcomed by some, but also criticized by others as being overly eclectic in legal methodology and having the potential to negatively impact the interpretation of sharia in Muslim-majority countries.
The needs of Western Muslims have given rise to a new branch of Islamic jurisprudence which has been termed the jurisprudence of (Muslim) minorities (fiqh al-aqallīyāt). The term is believed to have been coined in a 1994 fatwa by Taha Jabir Alalwani, then the chairman of FCNA, which encouraged Muslim citizens to participate in American politics. This branch of jurisprudence has since been developed primarily, but not exclusively for Muslim minorities in the West.
[….] In the pre-modern era, most fatwas issued in response to private queries were read only by the petitioner. Early in the 20th century, the reformist Islamic scholar Rashid Rida responded to thousands of queries from around the Muslim world on a variety of social and political topics in the regular fatwa section of his Cairo-based journal Al-Manar. In the late 20th century, when the Grand Mufti of Egypt Sayyid Tantawy issued a fatwa allowing interest banking, the ruling was vigorously debated in the Egyptian press by both religious scholars and lay intellectuals.
In the internet age, a large number of websites have appeared offering fatwas to readers around the world. For example, IslamOnline publishes an archive of ‘live fatwa’ sessions, whose number approached a thousand by 2007, along with biographies of the muftis. Together with satellite television programs, radio shows, and fatwa hotlines offering call-in fatwas, these sites have contributed to the rise of new forms of contemporary ifta. Unlike the concise or technical pre-modern fatwas, fatwas delivered through modern mass media often seek to be more expansive and accessible to a wide public.
Modern media have also facilitated cooperative forms of ifta. Networks of muftis are commonly engaged by fatwa websites, so that queries are distributed among the muftis in the network, who still act as individual jurisconsults. In other cases, Islamic jurists of different nationalities, schools of law, and sometimes even denominations (Sunni and Shia), coordinate to issue a joint fatwa, which is expected to command greater authority with the public than individual fatwas. The collective fatwa (sometimes called ijtihād jamāʿī, ‘collective legal interpretation’) is a new historical development, and it is found in such settings as boards of Islamic financial institutions and international fatwa councils.
As the role of fatwas on strictly legal issues has declined in modern times, there has been a relative increase in the proportion of fatwas dealing with rituals and further expansion in purely religious areas like Quranic exegesis, creed, and Sufism. Modern fatwas also deal with a wide variety of other topics, including insurance, sex-change operations, moon exploration, beer drinking, abortion in the case of fatal foetal abnormalities, or males and females sharing workplaces. Public ‘fatwa wars’ have reflected political controversies in the Muslim world, from anti-colonial struggles to the Gulf War of the 1990s, when muftis in some countries issued fatwas supporting collaboration with the US-led coalition, while muftis from other countries endorsed the Iraqi call for jihad against the US and its collaborators. In the private sphere, some muftis have begun to resemble social workers, giving advice on various personal issues encountered in everyday life.
The social profile of the fatwa petitioner has also undergone considerable changes. Owing to the rise of universal education, those who solicit fatwas have become increasingly educated, which has transformed the traditional mufti–mustafti relationship based on restricted literacy. The questioner is now also increasingly likely to be female, and in the modern world Muslim women tend to address muftis directly rather than conveying their query through a male relative as in the past. Since women now represent a significant proportion of students studying Islamic law and qualifying as muftiyas, their prominence in its interpretation is likely to rise. A fatwa hotline in the United Arab Emirates provides access to either male or female muftis, allowing women to request fatwas from female Islamic legal scholars.
The vast amount of fatwas produced in the modern world attests to the importance of Islamic authenticity to many Muslims. However, there is little research available to indicate to what extent Muslims acknowledge the authority of various fatwas and heed their rulings in real life. Rather than reflecting the actual conduct or opinions of Muslims, these fatwas may instead represent a collection of opinions on what Muslims ‘ought to think.’”
* * *
Readers wanting to investigate or explore some of the relevant material brought up in this post might look at the terms “fiqh” and “Sharī‘ah” in my study guide for Islam as well as my essay on “Democracy and Islam” (the latter is a bit dated here and there), both of which are freely available (to read or download) on my Academia page. It also helps to be conversant in the cross-cultural and comparative study of worldviews (religious and secular), a subject I have broached several times over the years at this blog.
Relevant Bibliographies
Posted at 04:15 AM in Patrick S. O'Donnell | Permalink | Comments (0)
We should not indulge in denial or deceive ourselves; in other words, we should not look away from (i.e., deliberately ignore) what is before our eyes and ears: the fascism of the cult of Trump is not going away anytime soon. That conclusion is neither alarmist nor simply reflective of partisan hyperbole. If you believe otherwise, you are not paying sufficient attention. I am—or will be—quite happy to be proven wrong, and welcome reasonable arguments that attempt to persuade me that I am indeed mistaken.
In early spring of last year I stated that the so-called Grand Old Party has morphed into a political party of imaginary and delusionary grievance, of crass and cartoonish schtick, of denial and desperation, of repugnance and regression, of illusion and irrationality, of empty gestures and vain cynicism, of authoritarianism and (actual and aspirational) fascism, of obscene wealth and amoral power, of sycophants and cults, of self-deception and phantasy, of white supremacy and narcissistic privilege, and of Christian nationalism: a faux populism of bread and circuses that has failed to conceal, let alone contain, a degraded and debased political practice mired in a toxic dump of greed, corruption, and sleaze. It is a politics that lacks any meaningful disposition to truth and evidences no concern whatsoever for either Liberal constitutionalism or democracy. The anti-democratic and now nakedly fascist “Trump effect” shows few signs of going away, despite the surprising if not encouraging defeat in Kansas of the referendum proposal for a state constitutional amendment that would end protections for abortion. Recent primary results in Arizona and Michigan are disappointing for those of us hoping more Republicans would openly disavow the fascist cult of Trump. People still shrink back from characterizing Trump, his ideologues, and many if not most of his followers as “fascist,” yet there is abundant evidence testifying to the aptness of this descriptive characterization, one piece of which follows:
“He has extolled the value of racial purity, is vehemently anti-immigration, has cultivated close ties with Russia’s Vladimir Putin and is a speaker at this week’s Conservative Political Action Conference, known as CPAC, in Dallas. Hungarian Prime Minister Viktor Orbán, 59, is widely criticized around the world for systematically dismantling his country’s nascent democracy during his 12 years in power — but that hasn’t stopped him from emerging as a darling of many on the right in America.
Former President Donald Trump and his onetime chief strategist Steve Bannon are also speaking at CPAC, America’s top conservative conference. And both are fans of Orbán’s. Trump endorsed Orbán in January, three months before he was re-elected to a fourth term, and Bannon called the Hungarian leader ‘Trump before Trump’ in a speech in Budapest in 2018.
While Trump was voted out, Orbán, the first European Union leader to speak out in support of Trump’s campaign in 2016, looks unassailable, with control over the media, the legislature and the judiciary in Hungary. Meanwhile, the fractured left-wing, centrist opposition is marginalized.
Fox News’ Tucker Carlson, who has interviewed Orbán and hosted his show from Budapest for a week in 2021, describes Hungary as a ‘small country with a lot of lessons for the rest of us.’ In January Carlson released a documentary titled ‘Hungary vs. Soros: Fight for Civilization’ — a reference to George Soros, 91, the Hungarian-born Jewish businessman and philanthropist who has become a scapegoat for Orbán and his allies. One Hungarian writer, Balázs Gulyás, said that in praising Orbán, Carlson had depicted Hungary as a ‘conservative Disneyland.’
Ahead of this week’s conference, Trump released a picture of him and Orbán together. ‘Great spending time with my friend,’ Trump said in a press release. ‘We discussed many interesting topics — few people know as much about what is going on in the world today. We were also celebrating his great electoral victory in April.’” — Patrick Smith for NBC News today (Aug. 4, 2022)
Recommended Reading
Several relevant bibliographies (embedded links):
Addendum
Please see “‘Fight the Barbarians’: The MAGA Movement Lays Out a Warpath at CPAC”
By Tim Dickinson, for Rolling Stone, Aug. 6, 2022
First, my brief comment by way of an introduction, followed by highlights from the article.
These “dark, militant speeches” are not “thinly veiled calls for violence,” but the nakedly fascist rhetoric of those calling upon the Right to use violence against Democrats, whom they demonize in the kind of public rhetoric we find in incitement to genocide (which goes beyond ‘hate speech’), wherein those one disagrees with, one’s political opponents, are characterized in Manichean (demonizing) and dehumanizing terms that place them beyond the pale, as it were. This, to put it mildly, is the anti-Liberal and anti-democratic rhetoric favored by fascists and authoritarians. It is an intensification and militarization of the insurrectionist language that preceded the Jan. 6 assault on the Capitol. And it is an exquisite exemplar of what psychologists term projection.*
* For brief definitions and more extensive introductions, see the term in Burness E. Moore and Bernard D. Fine, eds., Psychoanalytic Terms and Concepts (The American Psychoanalytic Association and Yale University Press, 1990): 149-150; as well as a more historically and theoretically oriented entry in Jean Laplanche and Jean-Bertrand Pontalis (Donald Nicholson-Smith, trans.), The Language of Psychoanalysis (Karnac Books, 1988 [Hogarth Press, 1973]): 349-355; and, relatedly, the term “projective identification” in Elizabeth Bott Spillius, et al., The New Dictionary of Kleinian Thought (based on A Dictionary of Kleinian Thought by R.D. Hinshelwood [1991]) (Routledge, 2011): 126-146.
Posted at 12:13 PM in Patrick S. O'Donnell | Permalink | Comments (0)
“There are two main questions we can ask ourselves with respect to the use of lotteries. First, under which conditions would they seem to be normatively allowed or prescribed, on grounds of individual rationality or social justice? Second, in which cases are lotteries actually used to make decisions and allocate tasks, resources and burdens? There is no reason, of course, to expect the answers to these questions to coincide. Hence we can generate two further questions. What explains the adoption of lotteries when normative arguments seem to point against them? What explains the non-adoption of lotteries in situations where they would seem to be normatively compelling? The last question is perhaps the most intriguing and instructive one. I shall argue that we have a strong reluctance to admit uncertainty and indeterminacy in human affairs. Rather than accept the limits of reason, we prefer the rituals of reason.” — Jon Elster, Solomonic Judgements: Studies in the Limitations of Rationality (Cambridge University Press, 1989): 36-37.
* * *
“Rigid application of some particular decision-making criterion to settle disputes might render adjudication less fraught with complexity and ambiguity. Depending on the criterion used, such application might even make the process of adjudication less partial. If the criterion is easy to apply, moreover, the costs of decision-making are likely to be reduced. One criterion which offers all of these qualities is random selection. Decision-making by lot is likely to be simple, objective, and cheap. ‘Cast lots and settle a quarrel, and so keep litigants apart’ (Proverbs 18: 18). Randomized decision-making will be, however, unreasoned. If a single argument rests at the heart of this book it is that, in law, faith in reason is sometimes maintained at too high a cost. The lottery, probably more than any other decision-making device, demands that this argument be accorded serious consideration.” — Neil Duxbury, Random Justice: On Lotteries and Legal Decision-Making (Oxford University Press, 1999): 175.
* * *
“Randomness is indistinguishable from complicated, undetected, and undetectable order; but order itself is indistinguishable from artful randomness.” — Nassim Nicholas Taleb, The Bed of Procrustes: Philosophical and Practical Aphorisms (Random House, 2010): 58.
* * *
As I noted in the comments to my syllabus that revolved largely around Athenian democracy in classical Greece, one of the topics prompted by that reading and research was “democratic rhetoric,” at least in its contemporary incarnation. I hope to compile a list for that subject soon. Meanwhile, I’ve completed another brief bibliography, once more limited to twenty-five items (although this list includes several journal articles), and likewise provoked by the aforementioned reading and research. Like Professor of Law Neil Duxbury, I first became intrigued by this subject upon reading Jon Elster’s Solomonic Judgements: Studies in the Limitations of Rationality (Cambridge University Press, 1989), although it is also an important part of his later volume, Local Justice: How Institutions Allocate Scarce Goods and Necessary Burdens (Russell Sage Foundation, 1992).
Lots, Lotteries, and Sortition in Law and Politics: a very select bibliography
Posted at 05:07 PM in Patrick S. O'Donnell | Permalink | Comments (0)
This is one of the syllabi I’ve composed for myself this summer that touches on several areas of my research (e.g., democracy, law, moral psychology, virtues and vices, emotions, philosophy of mind and mental conflict ….). I’ve read some of these books in whole or part and am reading them again. I limited myself to 25 titles. I thought perhaps a few readers might be interested in this list.
One subject that has piqued my interest of late is rhetoric, especially in an Aristotelian sense, both as it bears on democratic communication and discourse and insofar as it remains tethered to ethical or moral, logical, and rational norms (hence when its function and appeal are not primarily emotional and when its arguments can be simultaneously sound and persuasive). Democratic rhetoric must be sensitive to the values, principles, and practices of democracy, such that it should not be construed solely in instrumentalist terms, even if we remain, rightly, concerned about consequences. Democratic rhetoric can be crafted to suit its intended audience, which is not always “the people” simpliciter, but various “publics,” “groups,” “strata,” “politicians,” “classes,” and so forth. This does not mean it need appeal to or rely on existing (or ‘naked’) preferences or desires, on myopic or narrow self-interests, or on flattering its recipients or hearers (one might, however, allow for or encourage, especially at the beginning of a speech, what Cicero understood as captatio benevolentiae). Contrast this with the earliest forms of rhetoric in democratic Athens, wherein “the ideas and attitudes of the orators must reflect what the majority of their audience was only too glad to hear.” Indeed, inasmuch as it is democratic, rhetoric can and should often be (apart from descriptive or explanatory purposes, having to do, say, with court rulings, legislation, or policy proposals) challenging or even provocative, motivating individuals and groups to see the proverbial big picture or appreciate the long-term … and so forth and so on (more later).
Finally, myriad and seemingly insuperable problems arise in a world of communication dominated by mass and social media, and democratic rhetoric will have to speak to, if not aim to solve, at least some of these problems. At the very least, it will have to adhere to self-imposed norms and constraints fashioned with reasoned or reasonable and democratic warrant having to do with free speech, the value of which will be better appreciated, if not cherished, given the sustained and widespread praxis of such democratic rhetoric.
Posted at 08:27 AM in Patrick S. O'Donnell | Permalink | Comments (0)
Below one will find the “Epilogue” I recently appended to my latest iteration of the bibliography on Comparative Philosophy.
“A comparative philosophy that is worthy of both its constitutive terms cannot be simply about comparison. Simply comparing philosophies but not comparing them philosophically will not do. This is why fusion philosophy decidedly demands of the comparative philosopher not to be satisfied with the role of the comparatist. The comparative philosopher should aim beyond comparison at a philosophical argument (strictly or loosely understood) that can stand on its own, that is, it does not rely on the distinctness of the comparanda. The borders that any comparison necessarily erects should not be left intact as if untouched. Fusion means that borders that separate are at least de-emphasized, perhaps even torn into tatters, but in any case transcended.” — From the “Afterword/Afterwards” by Arindam Chakrabarti and Ralph Weber, co-editors of the volume Comparative Philosophy without Borders (Bloomsbury Academic, 2016).
* * *
“There are recognizable barriers from which men have always sought to emancipate themselves, in order to obtain access to something, and appropriate something, that is conceived time and again in the ideas of freedom, joy, happiness, etc., which no cynical irony can expunge. The inexhaustible possibilities of human nature, which themselves increase with cultural progress, are the innermost material of all utopias, and moreover a very real, and in no way immaterial material at that. They inevitably lead to the desire to transform human life.” — Rudolf Bahro
* * *
“The hypothesis proposed in Eunomia* suggests that a society constitutes itself not only in the form of law and legal institutions and not only in the real-world struggles, political and economic and personal, of everyday life, but also in society’s struggle about ideas. The self-constituting of the international society of the twenty-first century will be no different. The infrastructure of international social consciousness is now very substantial. [….] Among the ideas which help to constitute a society are ideas of a particular kind, ideas which have been referred to traditionally as ideals. Our ideals allow us to say what is wrong with our world and to imagine ways in which it could be made better, and they inspire us to want to make a better world. In the 1990s it was possible to see the emerging of something which might be construed as the universalizing of human ideals. The pathos of that development lies in the fact that it was accompanied by something which can only be called the universalization of social evil. [….]
Theory [and philosophy] is a form of practice, since practice at any given time inevitably takes the form which theory [or philosophy] makes available at any given time. We see the world in the form in which our ideas present the world to us. Since the world, the natural and the human world, is the arena of our action, our action is formed by the form of the world formed by our minds. We enact the drama which our minds have composed. We speak the language which is available to us, seeing our potentiality and our purposes in the reality which we are able to speak. To change our idea of our world, to speak about the world in a new way, is to change what our world will become. The road from the ideal to the actual lies, not merely in institutional novelties, or programmes and blueprints of social change, but also, and primarily, in a change of mind. A revolution in society is also and, above all, a revolution of the mind.
Eunomia suggests that a society forms itself through a three-in-one process of self-constituting, an unceasing interaction between ideas and practice and [after Seana Shiffrin, democratic] law, a society’s ideal, real, and legal constitutions. It follows that changes in any one of its forms of self-constituting affect the total process of a society’s self-constituting, and that large changes are liable to have large effects on that process. The history of human social progress suggests that new ideas have what may be called a prevenient effect on the self-constituting of society. New social theory [like utopian imagination and thought] cannot, by and in itself, cause a fundamental change in the institutional structure and the everyday practice of a society, social change being the effect of so many other causes, physical and organic, and practical, which go far beyond the self-contemplating of the human mind. What theory can do is to provide a framework into which social change flows, an available mould or matrix, enabling us to understand, to control, and to shape social change. Social philosophy cannot dictate the form of the new social institutions [hence Marx’s reluctance to specify the various components of a communist society in a manner that would satisfy social scientists and politicians, as well as impatient revolutionaries and reformers alike]. New social institutions arise as new social philosophy meets actual and potential social reality in a given society. New social philosophy is an organism waiting for life to be breathed into it by new social [democratic] practice [which is participatory, deliberative, and representative]. New social practice needs new social philosophy to determine the form of its organic growth.
To change international social reality, we have to break the mould in which that reality has been formed. To take power over the future of international society, we have to take power over its self-constituting. To change the function and the functioning of the international social institutions which have been formed over recent centuries [largely thus not exclusively through the global entrenchment and incessant destabilizing, destructive, and demoralizing effects of the latest form of hyper-industrialized capitalism, which has finally and fully revealed itself as intrinsically hostile to equal liberties and freedom, humane and spiritual forms of community, as well as the natural world; undemocratic forms of political power have directly and indirectly aided and abetted these increasingly malign forces]. To take power over the social power which has caused so much social evil, so much human suffering over recent centuries, we have to use the form of power most readily available to us, the power of ideas. To bring order to the new world disorder which we have inherited from the twentieth century, we must use the ordering power of the human mind, its power to re-order its own order, and to bring order to disorder, the mind’s wonderful power to transcend itself and to cure itself.
And who are ‘we’? We are the people, nameless pawns in the game of diplomacy, human sacrifices in the rite of war. We are the people, permanent victims of the abuse of public and economic power—shackled in serfdom and slavery, herded like cattle into mines and factories and slums, into concentration camps and refugee-camps, driven at gun-point from our families and our homes, dehumanized by poverty and famine and disease, by the new slavery of consumerism and the mindless hedonism of popular culture. We are people with a permanent revolutionary possibility, the power to make a revolution, not in the streets [in which many of us would be slaughtered like factory farm animals] but in the mind. And the long journey of revolutionary change begins with a single revolutionary step. We can, if we wish, choose the human future. We, the people, can say what the human future will be, and what it will not be.” — Philip Allott, Eunomia: New Order for a New World (Oxford University Press, 2001)
* * *
Epilogue (to the bibliography on comparative philosophy)
Comparative philosophy can be more than a well-motivated philosophical, intellectual, and theoretical or speculative exercise, more than just a genuine “meeting of minds,” however significant that alone happens to be within the conditions of multiculturalism and pluralism, conditions defined and shaped by religious and secular worldviews in conjunction with cultural and political traditions and philosophies, conditions conspicuous for the perseverance of varying degrees of episodic and intransigent conflict that at times descends into famine, war or genocide. In other words, on occasion comparative philosophy can be part of if not central to a utopian or idealistic endeavor in the best sense, one with cosmopolitan pretensions that have profound and far-reaching political consequences capable of transcending the moral weaknesses and limits of political realism and raison d’état, thereby achieving a Rawlsian-like “overlapping consensus” with world-historical ramifications, as was the case with the Universal Declaration of Human Rights. In this instance, comparative philosophy was in some respects an unintentional by-product of the Human Rights Commission which met with and overcame considerable skepticism, uncertainties, and various obstacles that arose from “facts on the ground” (including great power politics and anti-colonialism):
“Was it really possible for a fledgling organization to produce a document acceptable to delegates from all the countries in a constantly expanding United Nations? By 1948, when the Declaration was put to a vote, the United Nations had fifty-eight member states containing four-fifths of the world’s population—twenty-two from the Americas, sixteen from Europe, five from Asia, eight from the Near and Middle East, four from Africa, and three from Oceania. Could any values be said to be common to all of them? What did it mean to speak of certain rights as universal?
Anticipating such questions, the UN’s Educational, Scientific and Cultural Organization (UNESCO) recruited some of the leading thinkers of the day for a Committee on the Theoretical Bases of Human Rights. This blue-ribbon panel, chaired by Cambridge historian E.H. Carr, included University of Chicago philosopher Richard McKeon as rapporteur and French social philosopher Jacques Maritain, who became one of its most active members. In January 1947, as this group was coming together, UNESCO’s director, noted scientist Julian Huxley, had sent the poet Archibald MacLeish to the Human Rights Commission’s Lake Success meeting to apprise the commissioners of UNESCO’s interest in their work and its desire ‘to be as useful as possible.’ The philosophers’ group began its work in March by sending a questionnaire to statesmen and scholars around the world—including such notables as Mohandas Gandhi, Pierre Teilhard de Chardin, Benedetto Croce, Aldous Huxley, and Salvador de Madariaga—soliciting their views on the idea of a universal declaration of human rights. [….]
In 1948 the framers of the Universal Declaration achieved a distinctive synthesis of previous thinking about rights and duties. After canvassing sources from North and South, East and West [emphasis added], they believed they had found a core of principles so basic that no nation would want to openly disavow them [the General Assembly of the UN eventually adopted the Declaration without a single dissenting vote]. They wove these principles into a unifying document that quickly displaced all antecedents as the principal model for the rights instruments in force in the world today. [….]
The story of the parent document of the human rights movement [i.e. the Universal Declaration of Human Rights] is the story of a group of men and women who learned to cooperate effectively despite political differences, cultural barriers, and personal rivalries. It is an account of their attempt to bring forth from the ashes of unspeakable wrongs a new era in the history of rights. [….] [It] is to a large extent the story of a journey undertaken by an extraordinary group of men and women who rose to the challenge of a unique historical moment. The brief interlude between the end of World War II and the definitive collapse of the Soviet-American alliance lasted just barely long enough to permit major international institutions such as the UN and the World Bank to be established and for the framers of the Universal Declaration to complete their task. The members of the Human Rights Commission were well aware that they were engaged in a race against time: around them, relations between Russia and the West were deteriorating, the Berlin blockade raised the specter of another world war, the Palestinian question divided world opinion, and conflict broke out in Greece, Korea and China. [….] They had to surmount linguistic, cultural, and political differences and overcome personal animosities as they strove to articulate a diverse set of principles with worldwide applicability. [….]
With the exception of Eleanor Roosevelt, most of the members of the committee that shaped the Declaration are now little remembered outside their home countries. Yet they included some of the most able and colorful public figures of their time: Carlos Romulo, the Filipino journalist who won a Pulitzer Prize for his articles predicting the end of colonialism; John P. Humphrey, the dedicated Canadian director of the UN’s Human Rights Division, who prepared the preliminary draft of the Declaration; Hansa Mehta of India, who made sure the Declaration spoke with power and clarity about equal rights for women before they were recognized in most legal systems; Alexei Pavlov, the brilliant nephew of the conditioned-reflex scientist, who had to go the extra verst [the Russian equivalent of the ‘extra mile’] to dispel suspicions that he was still bourgeois; and Chile’s Hernán Santa Cruz, an impassioned man of the Left who helped assure that social and economic rights would have pride of place in the Declaration along with political and civil liberties.
Among the Declaration’s framers, four in particular played crucial roles: Peng-chun Chang, the Chinese philosopher, diplomat, and playwright who was adept at translating across cultural divides; Nobel Peace Prize laureate René Cassin, the legal genius of the Free French, who transformed what might have been a mere list or ‘bill’ of rights into a geodesic dome of interlocking principles; Charles Malik [a Lebanese academic, diplomat, philosopher, and politician], existentialist philosopher turned master-diplomat, a student of Alfred North Whitehead and Martin Heidegger, who steered the Declaration to adoption by the UN General Assembly in the tense cold war atmosphere of 1948; and Eleanor Roosevelt, whose prestige and personal qualities enabled her to influence key decisions of the country that had emerged from the war as the most powerful nation in the world. Chang, Cassin, Malik, and Roosevelt were the right people at the right time. But for the unique gifts of each of these four, the Declaration might never have seen the light of day. [….]
For everyone who is tempted to despair of the possibility of crossing today’s ideological divides, there is still much to learn from Eleanor Roosevelt’s firm but irenic manner of dealing with her Soviet antagonists; and from the serious but respectful rivalry between Lebanon’s Charles Malik and China’s Peng-chun Chang. There is much to ponder in the working relationship between Malik, a chief spokesman for the Arab League, and René Cassin, an ardent supporter of a Jewish homeland, who lost twenty-nine relatives in concentration camps. When one considers that two world wars and mass slaughters of innocents had given the framers every reason to despair about the human condition, it is hard to remain unmoved by their determination to help make the postwar world a better and safer place.” — Mary Ann Glendon, A World Made New: Eleanor Roosevelt and the Universal Declaration of Human Rights (Random House, 2001)
As Jack Donnelly has argued, this international consensus on human rights came about as (or prefigured) something much like—if not identical to—John Rawls’s later idea of an “overlapping consensus,” which provides a “descriptively accurate and morally attractive explanation” of how this particular international legal agreement emerged as a definitive expression of the “normative universality of human rights” (of course Rawls envisioned the possibility of this overlapping consensus arising on the political terrain of a liberal democratic nation-state, not in the international arena of nation-states). In other words, individuals and states, arguing from the premises of or taking a perspective generated from within their different “comprehensive doctrines” or religious and secular worldviews, were able to arrive at a consensual endorsement of the model of human rights embodied in the Universal Declaration, and thus this groundbreaking international human rights instrument was not founded upon or derived from any one particular religious or secular philosophy or worldview or “comprehensive doctrine.” This is one inspiring and ennobling exemplification of comparative philosophy enlisted on behalf of exemplary (and cosmopolitan) political and emancipatory values and purposes for the general benefit or welfare and well-being of individuals and humanity itself.
In addition to the Glendon title cited above, please see Jack Donnelly’s Universal Human Rights in Theory and Practice (Cornell University Press, 2003) as well as Johannes Morsink’s The Universal Declaration of Human Rights: Origins, Drafting, and Intent (University of Pennsylvania Press, 1999). For further research, one might consult my bibliography, Human Rights: Philosophical, Legal, and Political Perspectives.
* “In Greek mythology, Eunomia was a minor goddess of law and legislation (her name can be translated as ‘good order,’ ‘governance according to good laws’), as well as the spring-time goddess of green pastures (eû means ‘well, good’ in Greek, and nómos, means ‘law,’ while pasturelands are called nomia).”
Posted at 05:43 AM in Patrick S. O'Donnell | Permalink | Comments (0)
My latest compilation is on Buddhist Philosophy. What follows is the introduction to the bibliography.
This list covers philosophically oriented Buddhist titles from within Buddhism, as well as works by those who bring modern philosophical arguments and methods of analysis (and phenomenology, hermeneutics, etc.) to their examination of Buddhist philosophy, occasionally comparing it to other philosophical worldviews (hence those writing first and foremost as Buddhists on the one hand, and those identifying as professional philosophers or using contemporary philosophy to study Buddhist philosophy, on the other; there is occasionally overlap between the two approaches). It also has titles that examine Buddhist psychology and several works on topics in Buddhist aesthetics and philosophy of art. I have a separate bibliography on Buddhist art as well.
Like most of my bibliographies, this one is subject to two constraints: books only, in English (with a few exceptions owing to the Stanford Encyclopedia of Philosophy [SEP] entries). While this list is fairly comprehensive, it is not exhaustive; still, it should be useful for most purposes, as many of the titles have excellent bibliographies to supplement our list, containing the requisite primary source literature.
While it might reasonably be said that one part of the threefold division of the Eightfold Path (Skt., āryāṣṭāṅgamārga) in Buddhism represents philosophy, namely, prajñā (sometimes translated as ‘wisdom’), meaning insight into the “true nature of reality” (involving both ‘discriminating knowledge’ and ‘intuitive apprehension’), the other two parts are no less relevant: sīla or ethics, and samādhi (concentration or one-pointedness, mind-training, and meditation). Morality and ethics are of course one branch of philosophy in the West, and samādhi has to do with the Buddhist philosophy of mind and psychology (and the philosophy of psychology). At the same time, the Eightfold Path represents Buddhist spiritual practices, the tripartite structure consisting of eight complementary and mutually reinforcing parts, even if Buddhism is sometimes reduced to or largely associated with meditation (which comprises only three of the eight parts). In contemporary philosophical terms, Buddhist philosophy is “therapeutic,” that is, it is “philosophy as therapeia,” and thus its philosophy is intimately bound up with its “spiritual exercises.”
Those new to Buddhism should take the time and trouble to acquaint themselves with this worldview’s fundamental concepts and doctrines. So, in addition to the Eightfold Path:
The Four Noble Truths (i.e., the ‘four truths [known by the spiritually] noble’): (i) suffering (duḥkha/dukkha), (ii) the origin or cause of suffering (samudaya), (iii) the possibility for the cessation of suffering (nirodha), and (iv) the path by which we can eliminate suffering and achieve liberation from saṃsāra (termed nirvāṇa/nibbāna) (the medical analogy: symptomatic diagnosis, etiology, prescription and cure available, and implementing the therapeutic regimen); minimally speaking, it entails following the Eightfold Path.
The four “divine abidings” or “immeasurables” (brahmavihāra), namely, (i) loving kindness (maitrī/mettā), (ii) compassion (karuṇā), (iii) empathetic joy (muditā), and (iv) equanimity (upekṣā/upekkhā), which are ethical virtues but also meditative topics.
The pāramitās: spiritual and ethical virtues or “perfections” developed and practiced by a bodhisattva on the path to becoming a Buddha: (i) generosity or giving (dāna), (ii) morality or ethics (śīla/sīla), (iii) patience or forbearance (kṣānti/khanti), (iv) effort, vigour, and diligence (vīrya/viriya), (v) concentration, “one-pointedness,” or meditation (dhyāna/jhāna), and (vi) wisdom (prajñā), to which were later added (by the Mahāyāna tradition) (vii) skillful method or means (upāya), (viii) vow(s) (praṇidhāna), (ix) spiritual strength or power (bala), and (x) knowledge or intuitive pristine awareness (jñāna).
Pratītya-samutpāda/paticca-samuppāda, or the “chain of dependent or conditioned origination” (this doctrine can be viewed as having metaphysical, logical, psychological, and ethical dimensions): conditioned by (i) ignorance [avidyā/avijjā] are (ii) formations or karmic predispositions [samskāras]; conditioned by formations is (iii) consciousness [vijñāna/viññāna]; conditioned by consciousness is (iv) mind-and-body or name-and-form [nāma-rūpa/skandhas/kandhas]; conditioned by mind-and-body are (v) the six sense fields (6 organs, 6 objects, and 6 kinds of sense consciousness) [sadāyatana/salāyatana]; conditioned by the six sense fields is (vi) sense-contact [sparśa/phassa]; conditioned by sense-contact is (vii) feeling [vedanā]; conditioned by feeling is (viii) craving, or inordinate or improper desire [trsna/tanhā]; conditioned by craving is (ix) grasping [upādāna]; conditioned by grasping is (x) becoming [bhava]; conditioned by becoming is (xi) birth [jāti]; and conditioned by birth is (xii) decay and death [jarāmaranam].
Trilakṣaṇa/tilakkhaṇa, the three “marks” or characteristics of conditioned phenomena in saṃsāra: (i) impermanence (anitya/anicca), (ii) suffering (duḥkha/dukkha), of which there are three principal types, and (iii) no-self (anātman/anattā).
Skandha(s)/khandha(s): the five aggregates of “clinging,” that is, the five material and mental factors that take part in the rise of craving and clinging or grasping.
They are also explained as the five factors that constitute and account for one’s character or personality or sense of personhood or personal identity, namely, (i) “form” or “matter” (rūpa), (ii) sensations or feeling (vedanā), (iii) perception or discrimination or distinguishing (saṃjñā/saññā), (iv) “conditioning factors” (saṃskāra/saṅkhāra) or the karmic effects on our mental activities and actions, and (v) consciousness (vijñāna/viññāṇa). Finally, kleśa(s)/kilesa(s) are mental afflictions that cloud and/or disturb the mind, tending to incite unwholesome deeds of body and speech and/or unwholesome states of mind; they are often enumerated as three “poisons”: (i) greed or craving or inordinate, often sensual, desire (rāga or lobha), (ii) hatred or aversion (dveṣa/dosa), and (iii) delusion (moha), which carries connotations of confusion and ignorance. (Kleśas can include what we think of as the passions or passionate emotions, but the term is not identical to ‘emotions,’ for which there is not a strictly equivalent term in Buddhism.) And so forth and so on.
There is one indispensable reference source every serious student of Buddhism should have nearby: Robert E. Buswell, Jr. and Donald S. Lopez, Jr., The Princeton Dictionary of Buddhism (Princeton University Press, 2014).
Finally, before approaching Buddhist philosophy it helps to be acquainted with other Indian philosophies (the argumentative milieu). Thus I have a list for Indian philosophy, a compilation on Buddhism with helpful introductory titles, and a bibliography for Jainism, all for those fairly new to Buddhism generally and Buddhist philosophy in particular. And with respect to Buddhism’s interactions with classical Chinese worldviews (Confucian, Daoist, and Mohist, for example), please see the bibliography for same. I welcome notice of titles believed conspicuous by their absence here. (I apologize for the missing superscript and subscript diacritical dots.)
Posted at 06:17 AM in Patrick S. O'Donnell | Permalink | Comments (0)
This post was prompted by a news item from the Los Angeles Times: “46 people believed to be migrants found dead in Texas tractor-trailer.”
Be it by land or water (deaths at sea being far more numerous than those on land), these deaths are not accidents, unavoidable, or some senseless misfortune (leaving aside forced migration during war for now, which is a special case). They are the predictable result of inhumane, cruel, and unnecessary immigration policies. With the ecological and economic havoc caused by global warming, the migratory results of which we are already witnessing, matters will only worsen. We should not turn a blind eye, put our heads in the sand, or indulge in deliberate denial, however tempting during such troubled times as ours, when the scale and scope of our environmental, political, and economic problems can feel daunting if not overwhelming. These are our fellow human beings who want many of the things that most of us need and desire, first and foremost to live a life in which those who possess the requisite influence and power demonstrate a principled and practical concern for their welfare and well-being, recognize their right to life, and show fundamental respect for one of the historical aims of Liberalism: meaningful individual and collective responses to universal human dignity, the inherent dignity of human beings. Consider, for example, the following from Allen Buchanan:
“Whether or not the notion that the international legal human rights system is grounded in and serves to affirm the inherent dignity of humans is a central feature of the system [I happen to believe it is an axiomatic presupposition of the system], it is surely at least a desideratum for a justification of the system that it can make sense of this notion given its prominence. [….] [T]he relevant notion of dignity can be understood to include two aspects. First, there is the idea that certain conditions of living are beneath the dignity of the sort of being that humans are. [….] Let us call this first aspect of dignity the well-being threshold aspect. The second aspect of dignity is the interpersonal comparative aspect, the idea that treating people with dignity also requires a public affirmation of the basic equal status of all and, again, that if they are not treated in this way they suffer an injury or wrong. [….] The well-being threshold aspect of dignity concerns whether one is doing well enough for a being of the sort one is; it makes no reference to how one is treated vis-à-vis others. The interpersonal comparative aspect has to do with whether one is being treated as an inferior relative to other people. The point is that one’s dignity can be respected in the well-being threshold aspect and yet may be compromised in the interpersonal comparative aspect.” — From Buchanan’s The Heart of Human Rights (New York: Oxford University Press, 2013): 99-100.
Some Relevant Bibliographies
Posted at 05:02 AM in Patrick S. O'Donnell | Permalink | Comments (0)
The right-wing judicial ideologues on the Supreme Court have only a tenuous grip on conservatism. Their recent rulings exemplify what they used to decry as “judicial activism,” rendering them inconsistent and hypocritical. Moreover, they’ve revealed an equally inconsistent if not incoherent, cynical, and manipulative legal interpretation of the doctrine of stare decisis. Their understanding of rights is illiberal given its ad hoc and regressive disposition. Finally, they’ve demonstrated the irrationality, vacuity, and anti-democratic nature that shadows the “Originalist” doctrine of constitutional interpretation. Thus, not surprisingly, their rulings warm the perfervid hearts and stir the addled minds of Christian nationalists and anti-democratic militants in this country, including, first and foremost, the fascist members of the cult of Trump. I stop here lest my anger get the better of me.
See: “The Week from Hell,” by Eric Segall at Dorf on Law.
Posted at 12:33 AM in Patrick S. O'Donnell | Permalink | Comments (0)
Hilary Putnam’s “scientific realism” (there are several conceptions of this realism) entailed, among other things, “the affirmation of the theoretical or ‘unobservable’ entities of our successful sciences, especially physics, e.g. the reality of electrons.” I would argue, as have others in philosophy and psychoanalysis, that psychoanalysis as a science1 likewise affirms the existence of theoretical or unobservable entities (those that make for the ‘reality’ of unconscious and ‘primary process’ mentation, which is ‘a-rational and associatively-based’; this unconscious reality can be both ‘dynamic’ and cognitive or ‘adaptive’) with epistemic and explanatory effects or consequences: for instance, the corresponding mental states are causes of desires, beliefs, and behavior … and of psychic disorders of various kinds (contrast ‘secondary process’ mentation, which involves awareness and is largely rational, rule-following, and ‘logical’ in the manner of common sense, allowing us to see psychological events as continuous or determinist, causal, rational, and explainable). This notion of primary process mentation (along with secondary process mentation) is part of the psychoanalytic theory or philosophy of mind which, after Linda A. W. Brakel, I understand to be fundamental or foundational in the sense that it is the “core psychoanalytic theory with the clinical theory (or theories) derivative thereof.”
As philosophers of science can readily attest, “all scientific theories, methods and techniques have a discrete number of basic and underlying and necessary presuppositions.” Brakel proffers “five foundational precepts [or propositions] that comprise the basic presuppositions of psychoanalysis.” She notes that while there is not agreement among psychoanalysts about the precise nature and number of these presuppositions and assumptions, they should be able to agree that these presuppositions and assumptions are “… (i) taken for granted, (ii) used to derive other psychoanalytic concepts and propositions, and when functioning (iii) generate psychoanalytic data … [where] psychoanalytic data most broadly include everything reported and taking place [largely or eventually, by way of ‘free association’] in psychoanalytic sessions—e.g., dream contents; psychological symptoms …; [changes in] … mood; sexual concerns; slips of the tongue and parapraxes …; phantasies and daydreams; and reports of strong feelings about past persons and persons in the present, including the analyst.” The following are the first three assumptions (I, II, and III), followed by “one methodological tool” [or method] (IV), and one corollary (V):
(I and II) — “The first two assumptions, psychic (psychological) continuity and psychic (psychological) determinism are best considered together. They are psychology-specific versions of two general assumptions that are in place given any scientific theory. Continuity presumes some sort of regularity or lawfulness in the phenomena under study,2 and determinism means simply that the operation of cause and effect is presumed. To assume psychic continuity then is to take for granted that all of the agent’s psychological events—including those that look inconsistent or even incoherent such as pathological symptoms (for instance, phobias to water, benign animals or of open places), slips of the tongue, and parapraxes—are regular/lawful in a particular psychological way for that agent; namely that every psychological event can be understood as psychologically meaningful to that individual. Similarly, to assume psychic determinism is to presume that all psychological events—even those that look incoherent—have at least as one of their causes, a psychological cause, and can thereby be explained (at least in part) on a psychological basis. So for example, a dream element, a delusion, a confabulation all can be presumed to be necessarily regular and lawful phenomena with physical and psychological causes such that these phenomena cannot fail to be psychologically meaningful to the dreaming, delusional, or confabulating agent. Put another way, there is no dream element, delusion, or confabulation, in fact no psychological content possible for a particular person that will not have been caused by some aspect of that person’s psychology and that will not thereby be meaningful to that person.”
(III) — “The third assumption of psychoanalysis is that there exists a dynamic (psychologically meaningful) unconscious. It is posited because without such a postulate many psychological events ‘seem’ neither psychically continuous nor determined. [By way of example, Brakel proceeds to narrate a vivid account of a slip of the tongue she experienced.] [….] [T]he assumption of interceding unconscious processes and contents allows psychological determinism and continuity to be evident generally, even in those psychological events (such as neurotic symptoms, dream elements, delusions, and hallucinations) seemingly inconsistent to the point of frank bizarreness.”
(IV) — “The one methodological tool necessary for psychoanalysis is free association. Free association as part of the foundational core of psychoanalytic general theory has an interesting status in that it functions in a dual manner. First, free associations demonstrate apparent violations of psychic continuity and psychic determinism by revealing psychological events like parapraxes and symptoms that seem incoherent, with no meaningful psychological cause. Second, free associations resolve the apparent violations in continuity and determinism by providing the psychological contents, which when taken in conjunction with the assumption of a dynamic unconscious, can render what was seemingly inconsistent as quite continuous, now admitting of transparent psychological causation.” [Brakel then provides a compelling illustration of the dual functioning of free association from the analysis of one of her patients.]
(V) — “The final element comprising the foundational structure of psychoanalytic general theory—positing primary and secondary processes as two formally different types of mentation—is best described as a corollary to the other fundaments. The corollary status obtains because positing primary and secondary processes follows from and is demonstrated by the application of the three [foregoing] basic assumptions and free association. [….] Secondary process thinking is the largely rational, rule-following, ordinary logic of adults in the alert, waking state. Primary process thinking, in contrast, is a-rational and associatively based. When the secondary processes predominate, psychological events look continuous, caused, explainable, and rational. [….] Primary process thinking, on the other hand, is clearly not rational. Instead, it is a-rational. [….] Its hallmarks, in addition to the absence of ordinary rationality, displacement, and the type of feature-based categorizing by resemblance shown above [i.e., the two personal examples she provided in III and IV above that I’ve left out], include condensations (combining thoughts that by ordinary logic do not belong together), categorizations by contiguity in time and/or space, and substitutions of part for whole. Dream elements perhaps provide the most obvious, plentiful, and accessible examples of primary process contents demonstrating the effects of the operations (in all combinations) of these many a-rational processes.” [Brakel informs us in a note why she chose the term ‘a-rationality’ rather than irrationality: ‘A-rationality’ is much broader in its application than is ‘irrationality,’ the term more typically employed. Whereas irrationality implies rationality that has gone astray, entailing a dispositional capacity for rationality, a-rationality implies no such thing. Thinking, for example, can be a-rational if it is ‘not-yet-rational’ as in very young humans and ‘never-to-be-rational/but good enough for survival’ as in certain birds and mammals.’]
Notes
1. It is a science of subjectivity or the person, one that is in part parasitic on existing natural and social sciences and the humanities, although its philosophy of mind and psychological theory account for its novelty, as do its clinical practices, which stand within traditions of care and healing of the psyche generally and mind-body medicine or therapeutics in particular.
2. As Brakel notes, this is not “tightly constrained,” which is one way of saying science deals in generalities, that theories are schematic insofar as they introduce “order” into experience, some sort of taxonomy or classification of the relevant empirical facts; the taxonomy is scientific to the extent it is sufficiently universal so as to be employed consistently by the relevant community of scientists; at bottom it is a matter of “pattern recognition.” Any such classification is a “theory-laden” activity that unavoidably involves some reference to the surrounding intellectual and social environment. Philip Kitcher and other philosophers of science have shown how theory construction strongly resembles a cartographic practice, our theories being very much like maps. In the words of John Ziman:
“Almost every general statement one can make about scientific theories is equally applicable to maps. They are representations of a supposed ‘reality.’ They are social institutions. They abstract, classify and simplify numerous ‘facts.’ They are functional. They require skilled interpretation. And so on. The analogy is evidently more than a vivid metaphor. [….]
Thus to state that a certain ‘effect’ has a certain ‘cause’ merely corresponds to pointing to an ‘itinerary’ on a more general theoretical map. Such a statement is meaningless unless combined with other information about the scientific context in which it is to be interpreted—that is to say, how it is connected with other statements of fact or theory about the entities involved. Indeed, scientific theories can often be mapped as abstract networks, where nodes of fact and/or concept are cross-linked in many dimensions by laws, formulae, family resemblances or other functional relationships. [….]
… [S]cientific theories have to be understood as purposeful generalizations. Indeed, one of the achievements of the social sciences is to provide people with unsuspected ‘meanings’ for many cultural features of their lives. The entities that figure in a scientific theory are selected and simplified to suit its scope and function. Scientific theories, like maps, are under-determined. They are products of their time and place. They emerge out of the exercise of originality and scepticism in a disputatious community. Of course there are moments when a novel scientific theory seems precisely right. But its form and substance are neither pre-ordained nor permanent. Even the most compelling theory is usually shaped by unconscious aesthetic and utilitarian criteria. [….] Even good scientific theories, like good maps, can present the same ‘domain’ in a great variety of very different forms. [….] [It is of course important to keep in mind that] a map is not the same as the geography it represents.”
Ziman makes several more points about scientific theorizing applicable to psychoanalysis. First, scientific knowledge and reasoning are not “ruled by formal logic.” In other words, algorithmic compression and mathematical formalism are not absolutely necessary for scientific theories, if only because “not all the scientifically observable features of the world can be measured, and not all the results of scientific measurement can properly be treated as variables in mathematical formulae.” We must be alert to the “well-known danger of making unrealistic or over-simplified assumptions about real-world entities in order to set up a tractable mathematical model of their behaviour.” As Hilary Putnam reminded us and Ziman reiterates, “the domain of science [and rationality itself] extends far outside the scope of formal reasoning,” and “scientific reasoning varies from discipline to discipline. [….] According to the circumstances, valid scientific reasoning may involve the evaluation of testimony, empathic understanding of human behaviour, pattern recognition, category formation, classification, generalization, analogy, unification and, above all, the grammar of a natural language. [….] In effect, scientific rationality is no more than practical reasoning carried out as well as possible in the context of research.”
Coda
As Agnes Petocz writes in her persuasive contribution to the edited volume by Boag, Brakel, and Talvitie, “Almost everyone agrees that the question of the scientific status of psychoanalysis has been ‘done to death.’” But neither side has budged in this often passionate and sometimes acrimonious debate. I think Ziman’s book (cited in the references below) helps us clarify how it is not at all difficult to explain the several ways in which psychoanalysis can amply qualify as a science. On the other hand, and ironically, mainstream academic psychology, which fancies itself a “real” (if not ‘hard’) science in contrast to the fraudulent credentials of psychoanalysis, actually stands on quite brittle ground and fragile footing insofar as it is “shot through with misconceptions of science and is thus scientistic rather than genuinely scientific.”
References: Most of the material for this post was taken from Linda A.W. Brakel, Philosophy, Psychoanalysis, and the A-rational Mind (Oxford University Press, 2009). She expands a bit on this argument in chapter six of the volume she edited with Simon Boag and Vesa Talvitie: Philosophy, Science, and Psychoanalysis: A Critical Meeting (Routledge, 2018 [Karnac Books, 2015]). The extensive material from John Ziman in the second note is from his book, Real Science: What it is, and what it means (Cambridge University Press, 2000).
Relevant Bibliographies
Posted at 07:30 AM in Patrick S. O'Donnell | Permalink | Comments (0)
Propaedeutic
“Science is under attack. People are losing confidence in its powers. Pseudo-scientific beliefs thrive. Anti-science speakers win public debates. Industrial firms misuse technology. Legislators curb experiments. Governments slash research funding. Even fellow scholars are becoming sceptical of its claims. And yet, opinion surveys regularly report large majorities in its favour. Science education expands at all levels. Writers and broadcasters enrich public understanding. Exciting discoveries and useful inventions flow out of research laboratories. Vast research instruments are built at public expense. Science has never been so popular or influential.
This is not a contradiction. Science has always been under attack. It is still a newcomer to large areas of our culture. As it extends and becomes more deeply embedded, it touches upon issues where its competence is more doubtful, and opens itself to well-based criticism. The claims of science are often highly questionable. Strenuous debate on particular points is not a symptom of disease: it signifies mental health and moral vigour.
Blanket hostility to ‘science’ is another matter. Taken literally, that would make no more sense than hostility to ‘law,’ or ‘art,’ or even ‘life’ itself. What such an attitude really indicates is that certain general features of science are thought to be objectionable in principle, or unacceptable in practice. These features are deemed to be so essential to science as such that it is rejected as a whole—typically in favour of some other supposedly holistic system.
The arguments favouring ‘anti-science’ attitudes may well be misinformed, misconceived and mischievous. Nevertheless, they carry surprising weight in society at large. Those of us who do not share those attitudes have a duty to combat them. But what are the grounds on which science should be defended?
Many supporters of science simply challenge the various specific objections put forward by various schools of anti-science. In doing so, however, they usually assume that the general features in dispute are, indeed, essential to science. They may agree, for example, that scientific knowledge is arcane and elitist, and then they try to show that this need not be a serious disadvantage in practice. The danger of this type of defence is that it accepts without question an analysis which may itself be deeply flawed. In many cases, the objectionable feature is incorrectly attributed to ‘science,’ or is far from essential to it. Dogged defence of every feature of ‘the Legend’—the stereotype of science that idealizes its every aspect—is almost as damaging as the attack it is supposed to be fending off.”—John Ziman, from the opening pages of his book, Real Science: What it is, and what it means (Cambridge University Press, 2000).
* * *
“[Hilary] Putnam’s denial that there is a unique and complete description of the world in some metaphysically privileged vocabulary (say, the language of the natural sciences) reflects his commitment to conceptual pluralism. For example, a chair can be usefully and truthfully described in the language of physics, of carpentry, of furniture design, or of etiquette without it being the case that these vocabularies are reducible to some favored or fundamental vocabulary.” — From the editors’ Introduction to Putnam’s Philosophy in an Age of Science: Physics, Mathematics, and Skepticism (Harvard University Press, 2012)
* * *
“After the uncritical rejection of science and the postmodernist (and ill-informed) dismissal of its claims to objective knowledge, there has in recent decades been an equally uncritical embrace of the dubious idea that the natural sciences, particularly the neurosciences, have something important to say about, even have the last word on, art, ethics, politics, the law, economics; that they may transform the study of those topics into a properly grounded discipline; or, more ambitiously, that brain science is, or will be, the key to understanding humanity. Humanities will (at last) come of age as animalities. New interdisciplinary pursuits have emerged prefixed by ‘neuro,’ ‘evolutionary,’ or even ‘neuro-revolutionary,’ embodying the hopes of advancing our understanding of the law, of ethics, of aesthetic experiences.” — Raymond Tallis, Seeing Ourselves: Reclaiming Humanity from God and Science (Agenda Publishing, 2020): 21
I am assuming that, like me, most of you are not scientists (no doubt some of you are). Furthermore, you likely have in mind some notion or picture of what science is, leaving aside for now how you arrived at this conception (e.g., testimony, formal and informal learning, ‘authority’ of one kind or another, inferences and speculation, etc.). I am going to conjecture, with all due respect, that what you happen to believe about science—be it tentatively or confidently—is likely, or for the most part, mistaken or at least radically incomplete. One reason for this, I suspect, is owing to ideological conceptions of science that have arisen both spontaneously and deliberately from various quarters and precincts of our capitalist and hyper-industrialized society and culture (a topic for another day). One prominent form of such ideology goes by the name “scientism,” which has a religious-like faith in the properties, virtue, and power of (especially ‘natural’) science(s) (and technology, for that matter), often viewing physics as exemplifying same: an ideal archetype or model of what science can and should be.
I confess to having the temerity or perhaps chutzpah to propose a list (the number of titles being fairly small as such things go) of works that I have found helpful in sketching what I take to be the best conception(s) of science: what science is and should be. Being neither a scientist nor a philosopher should perhaps disqualify me from undertaking such a task, but one of the liabilities intrinsic to being an independent researcher and autodidact* in many fields of intellectual inquiry is that one feels free to question prevailing forms of intellectual and moral expertise and authority, however necessary they remain to our collective coexistence, welfare, and well-being. This list is thus owing to my (perhaps idiosyncratic) reading regimen in science and the sciences, including the philosophy of science, although it is safe to say that I cannot keep track of the latest literature in this regard, especially insofar as that means academic journal articles, for which I often lack the privilege of access.
That said, here is my list of twelve titles I think a “layperson” can read by way of getting a more or less true, or at least better, conception of science than the one most of us currently entertain. Our authors of course do not always agree with each other on this or that feature, purpose, or value of science, but these works bear a strong family resemblance to one another to the extent that the follies and vices of scientism, or what John Ziman called “the Legend,” are studiously avoided.
Further Reading: Please see, first, my Notes on Science and the Sciences, and then the compilation on “Sullied Sciences,” which has an appendix with a much larger list of titles should your appetite be whetted by the material above. An exemplification of science utilized on behalf of both democratic and socialist principles, values, and practices is found in this bibliography: Otto Neurath & Red Vienna: Mutual Philosophical, Scientific and Socialist Fecundity. Finally, a list which I hope to update soon may also be of interest: Ethical Perspectives on the Sciences and Technology. All of these items are freely available for viewing or download on my Academia page.
* I am “self-taught” in the sense that I lack proper credentials and academic degrees in the domains I am speaking about. Thus I am not, nor is anyone for that matter, literally “self-taught.”
Posted at 04:32 PM in Patrick S. O'Donnell | Permalink | Comments (0)
None of what follows this introduction is written by yours truly; still, I thought it might be of interest to some of our readers. It concerns subject matter I hope to address now and again over the summer. Long-time readers of this blog will be aware of my abiding interest in the values and purposes of psychoanalytic theory and therapy, especially in the wake of Adolf Grünbaum’s critique of psychoanalysis,1 which provoked vigorous philosophical arguments and debates about both the scientific standing of psychoanalysis in general and specific beliefs, hypotheses, and theories within psychoanalysis in particular, some of which do not depend upon or pivot around the science question (a question I happen to believe has been resolved in favor of Freud and psychoanalytic psychology, be it Freudian, Kleinian, etc.). Psychoanalysis can thus in part be characterized as a novel science of human subjectivity (the other principal part being clinical therapeutic analysis), one which encompasses fundamental properties of human nature, the constitution of the individual person or personhood, and the nature (hence philosophy) of mind. It is “novel” in the sense that it is neither simply a natural nor a social science while partaking of and transcending both, bringing to the fore, among other things, an extension of so-called folk psychology, a richer and more complex philosophy of mind, as well as a model of human development and individuation. It addresses the possible and plausible reasons for, as well as the prospects for the relief of, mental, existential, and physiological kinds of human pain and suffering as a mind/body therapy. It builds upon and in some respects still resembles modes of healing and “care of the psyche” found in shamanic, religious, magico-religious or alchemical, medicinal, psychological, philosophical, and other traditions. And given prior attempts at integrating Marxism and psychoanalysis, I think this science of human subjectivity can, when combined with Marxist sensibilities, contribute to filling out a reasonable if not persuasive biopsychosocial model that does justice to individual human beings, groups, and the contexts, settings, and situations they find themselves in that either enable and facilitate, constrain, or inhibit our efforts to attain human happiness (if not eudaimonia), welfare, and well-being. In our follow-up post, we will briefly look at the nature of the placebo effect in psychoanalysis, as well as, and perhaps more importantly, examine a psychoanalytic explanation of the psychological mechanisms underlying the placebo effect proffered by Linda A.W. Brakel.2
* * *
“Although the modern use of the term [placebo] and an awareness of how frequently the placebo effect is a factor in treatment have a relatively short history, it has frequently been argued that the placebos have been at the heart of therapeutics during most of medicine’s history. [….] But with so much in the history of successful therapeutics retrospectively attributed to the unknowing use of placebos, it is well to be reminded that, however unaware most healers may have been, the knowing use of placebos dates back a very long way. [….] [I]n the Physical Ligatures of Qusṭā ibn Lūqā [al‐Ba῾labakkī] (ca. 820–912), the placebo effect and suggestion were given a significant place in therapeutics. [….] His Physical Ligatures was ‘a learned “high medicine” text on the empirical use of magic,’ in which he made the point, ‘on no less authority than Plato,’ that ‘the mere belief in the efficiency of a remedy will indeed help in a cure;’ and numerous manuscripts and early modern printings suggest that his work was widely read in the West. Qusṭā maintained that ‘a benefit of some drugs in some circumstances is the effect that they have on the mind, provided patients believe them to be remedies.’ Most of his prescriptions involved ‘the power of persuasion or suggestion;’ and he clearly recognized the placebo effect when he said that ‘the action of a medicine may be no more than the effect the suggestion has on the mind.’” — Stanley W. Jackson, Care of the Psyche: A History of Psychological Healing (Yale University Press, 1999): 281-282.
* * *
“Given that placebos are such a powerful treatment on their own, we might ask ourselves: why are they not being used as a treatment more widely? One of the biggest barriers is an ethical dilemma. On the one hand, placebos are highly effective for certain symptoms and conditions, and can have a real therapeutic effect. On the other hand, to benefit from placebos, the predominant thinking has been that people need to be misled into believing they’re taking an active treatment. Since most medical authorities worldwide have agreed – for good reasons – that lying to patients isn’t a best practice, this reliance on deception has prevented the widespread use of placebos as treatments in and of themselves.
… [There is an] emerging research trend [that studies the] possible beneficial effects of placebos given without deception, also known as ‘open-label placebos’ or ‘non-deceptive placebos.’ In a foundational study in 2010, researchers at Harvard Medical School randomised patients experiencing irritable bowel syndrome (IBS) symptoms into either an open-label placebo group or a no-treatment control group – and crucially, all the patients knew which group they were in. The researchers told patients in the open-label placebo group that the placebo effect is powerful, that the body can respond automatically to taking placebo pills (similar to the classic conditioning example of Pavlov’s dogs, who salivated at the sound of the dinner bell), that a positive attitude helps but is not required, and that it is vital to take the pills faithfully for the entire 21-day study period, regardless of their belief in the pills. By the end of the study, even though the placebo pills contained no active ingredients, and despite the patients knowing they’d been taking placebos, they reported fewer IBS symptoms and more improvement in overall quality of life than patients in the no-treatment control group. [….]
So what’s really going on here? It’s not the sugar pill itself that leads to these changes in psychology and physiology, and it’s not magic either. Research in medicine and psychology on both traditional and open-label placebos suggests several mechanisms at play.
One is people’s expectations, or the positive belief that a treatment might have beneficial effects. In open-label placebo studies … people are often told that a belief in the placebo isn’t necessary, but they are encouraged to keep an open mind. Some of the clinical studies have involved volunteers for whom many other treatments have failed, and so they have added reason to hope that this experimental, slightly unorthodox treatment might work for them. Emerging research suggests that this belief might be partially responsible for the benefits. [….] Another possible mechanism is conditioning, in which the body learns to associate beneficial effects with an action or ritual. Many of us have had repeated experiences of taking pills that help reduce our symptoms – ibuprofen for a headache, NyQuil for a cold, or Pepto Bismol for an upset stomach. Over time, the body may learn to associate taking a pill with symptom relief. So the very act of taking a pill itself can catalyse the body’s own capacity for healing.
This conditioning is sometimes done explicitly in research with open-label placebos. In one clinical study, researchers asked patients recovering from spine surgery to pair their active pain medication with open-label placebos and also to take the placebo pills on their own. The placebo pills began exerting their own pain relief. Compared with the control group who received treatment as usual, patients who also took the open-label placebo pills consumed approximately 30 per cent less daily morphine in the days after surgery.
There are also other, less well-studied mechanisms that may be at play in open-label placebo effects. For example, when someone starts taking a treatment – placebo or not – they often begin paying closer attention to their own minds and bodies. Most conditions and symptoms fluctuate over time. For example, when we are experiencing a headache, even if we don’t take any medication or other action, the severity of that headache will naturally decrease over time. People who take open-label placebo pills may hope for improvement, making them more attuned to times when their symptoms subside. Other research shows that medical rituals – whether that’s taking a pill, getting an injection, or merely having a cup of tea and taking a hot bath – can evoke both expectations for healing and a conditioned response. Thus, the act of taking pills faithfully can become a healing medical ritual in and of itself.
Now that we are seeing an accumulation of evidence that open-label placebos might be helpful, researchers and clinicians are starting to think about how to apply them in practice to benefit patients. [….] Current and future research is continuing to shed light on which conditions open-label placebos might be best-suited to. As the field grows, a debate must follow: will open-label placebos ever become part of mainstream medicine? Is it better to focus efforts on convincing doctors (and patients) that open-label placebos can be effective, or should we better understand the mechanisms of open-label placebo effects and try to harness those mechanisms in conjunction with active medications and treatments, such as by boosting patient expectations? Will open-label placebos ever be more than a semi-fringe last resort for conditions and patients for whom most other treatments have failed?
Of no small consideration is the fact that, with little money to be made from prescribing sugar pills, the influential pharmaceutical industry has no incentive to promote this kind of medication over patented, privatised medications and treatments. In many ways, research on open-label placebos is still in its infancy. The next 10 years may determine the ultimate impact of this research. As the field progresses, one of us (Darwin) plans to continue to investigate and optimise open-label placebo effects on stress, anxiety and depression in both clinical and non-clinical settings.” — Darwin A. Guevarra and Kari A. Leibowitz, “Why placebo pills work even when you know they’re a placebo,” Psyche, March 9, 2022.
* * *
“… [M]any remarkable healings of the past that had been attributed to magic, to miracles, to faith, or imagination in the nineteenth and early twentieth centuries came to be considered the results of suggestion, and more recently these same healings have been attributed to the placebo effect. Saying that suggestive effects are nothing but placebo effects, or vice versa, seems an acknowledgment that these two phenomena overlap one another. At the very least the suggestion of help is usually an element in the use of placebo. Further, among the factors that [Arthur K.] Shapiro3 views as significant in the placebo effect are faith, hopeful expectations, the healer’s manner and attitude, and the doctor-patient relationship, all of which have been considered significant for successful suggestion in healing contexts.” — Stanley W. Jackson, Care of the Psyche: A History of Psychological Healing (Yale University Press, 1999): 282
Notes
References & Further Reading
Relevant Bibliographies
Posted at 05:30 AM in Patrick S. O'Donnell | Permalink | Comments (0)
Some of what follows is from something I first posted in 2008 at Daniel Goldberg’s Medical Humanities Blog (and cross-posted at Ratio Juris). I have only slightly revised it. The remainder of the material having to do with Alcoholics Anonymous (AA) is new.
The title of this post refers to two books: Theodore Dalrymple’s Romancing Opiates: Pharmacological Lies and the Addiction Bureaucracy (Encounter Books, 2006) and the late Herbert Fingarette’s Heavy Drinking: The Myth of Alcoholism as a Disease (University of California Press, 1988). The second half of the post addresses some features of AA, although, as you will see, Fingarette’s book is germane to both parts.
The Wikipedia entry on Dalrymple introduces our author: “Anthony (A.M.) Daniels (born 1949) is a British writer and retired physician (prison doctor and psychiatrist), who generally uses the pen name Theodore Dalrymple. He has written extensively on culture, art, politics, education and medicine, drawing upon his experience as a doctor and psychiatrist in Zimbabwe and Tanzania, and more recently at a prison and a public hospital in Birmingham, in central England.”
In a brief description of the views that animate his writing, the entry notes that Daniels “contends that the middle class abandonment of traditional cultural and behavioural aspiration has, by example, fostered routine incivility and ignorance among members of the working class. Occasionally accused of being a pessimist and misanthrope, his defenders praise his persistently conservative philosophy, which they describe as being anti-ideological, skeptical, rational, and empiricist.”
I happen to be an unabashed Marxist in political economy (largely along the lines of the ‘analytical Marxists’) while, at the same time, subscribing to many of the democratic values and principles found in the Liberal tradition of political philosophy. Nevertheless, risking inconsistency and contradiction, I feel free to draw upon, say, classical Greek thought, natural law traditions (which need not be religious, hence they can be metaphysical yet secular1), and even anarchist political philosophy if it suits my fancy. And this is in reference only to the political and economic parts of my worldview: the broadly philosophical and spiritual parts are principally of Asian provenance. In short, my own worldview is a hodgepodge or motley, in the words of the late Ninian Smart: “Our values and beliefs are more like a collage than a Canaletto. They do not even have consistency of perspective” (Religion and the Western Mind, 1987: 17). While consistency for my lifeworld may be an elusive virtue, I trust coherence is not, especially if one views the parts of the motley as reflective of an intellectual or epistemic and ethical division of labor. All of this by way of accounting for the fact that while Daniels is clearly a consistent conservative, that does not preclude me from finding this particular book persuasive on many counts, a reminder that, at least on what are sometimes confusingly called “cultural matters,” I too, on occasion and on some topics, can be conservative, or at least Confucian, wherein the notion of li involves an ethically formed approach to matters of etiquette, social norms, and perhaps rules generally.2
For some unjustifiable reason, Dalrymple’s book lacks notes of any kind, which is inexcusable, especially with regard to the extensive quoting of others; nor is there a bibliography, and the index is far from complete. As I was digesting the argument, it called to mind Herbert Fingarette’s controversial but equally provocative and, by my lights, well-argued book, Heavy Drinking: The Myth of Alcoholism as a Disease (1988). It turns out that Dalrymple/Daniels thought so too, for near the end of the book we learn that “[w]hat Fingarette said of alcoholism can be applied with equal force to opiate addiction,” namely, that “the addict has a problem, but it is not a medical one: he does not know how to live. And on this subject the doctor has nothing, qua doctor, to offer.”
According to Daniels, “The temptation to take opiates, and to continue to take them ... arises from two main sources: first, man’s eternal existential anxieties, to which there is no wholly satisfactory solution, at least for those who are not unselfconsciously religious; and second, the particular predicament in which people find themselves. Modern societies have created, or at least resulted in, a substantial class of persons peculiarly susceptible to what De Quincey calls ‘the pleasures of opium.’” Now it is the second source that Daniels elaborates upon in the following:
“ ... [I]n most western societies [we might ask ourselves, just what is conspicuously ‘modern’ and ‘western’ in such societies!], there is now a class in which tedium vitae is very common, almost normal. This is the class from which the great majority of heroin addicts now comes…. The young of this class are disaffected, and have good reason to be so. They are for the most part poor, though not of course in the absolute sense. On the contrary, they are healthier, better fed, dressed, and sheltered than the great majority of the world’s population, past and present, and dispose of appurtenances whose sophistication would have astonished our forefathers. But they are poor in the context of their own societies (which is what counts psychologically [such ‘relative poverty’ counts in other ways too, as Amartya Sen, and Adam Smith before him, has argued3]) and they are so badly educated (this time in the absolute sense) that any historical or geographical comparison, by means of which they might put their poverty in some kind of perspective, is completely beyond them [They may be poorly educated, formally speaking, but I suspect it is the informal education and upbringing that is more a problem here].
They have no interests, intellectual or cultural. The consolations of religion are closed to them. As for their family lives, loosely so-called, it is usually of an utterly chaotic nature.... Their sexual relationships are a kaleidoscope of ephemeral couplings, often with abandoned offspring as a result, motivated by an immediate need for sexual release and often complicated by primitive egotistical possessiveness leading to violence and conflict. Their emotional life is intense but shallow, and their interactions with others governed by power rather than any kind of principle. Life is a matter of doing what you can get away with.
Their economic prospects are poor. They are unskilled in countries in which the demand for unskilled labour is limited. [....] Any work that they do will be repetitive and dull; and while a man might once have derived satisfaction from performing a menial task well, from leading a life of modest usefulness to others, this is not an age when such humility is very common.
In large part, this is because people live to a quite unprecedented degree in the virtual world of so-called popular culture. From the very earliest age, their lives are saturated with images of celebrities, whose attainments are often modest but who have been whisked by good fortune into a world of immense and glamorous luxury. This comparison with their own surroundings, squalid if not poor in the literal sense, is not only stark but painful, and is experienced as an open wound into which salt is continually rubbed. It is also experienced as an injustice, for why should people with tastes and accomplishments not so very different from their own lead a life of fairy-tale abundance? The injustice of which they feel themselves to be the victim reduces any lingering inhibitions against causing harm to society, which means in practice individual members of society. Crime ceases to be crime, but is rather restitution or justified revenge. And the fact that the abundance they so desire is itself empty and leads to dissatisfaction and boredom entirely escapes them.
The end result is that, while profoundly dissatisfied with their present lot, they do not have ambitions towards which they might actually work in a constructive fashion, but daydreams, in which everything is solved at once in a magical way, daydreams from which the emergence into reality is always painful. Any aid to the perpetuation of the state of daydreaming (or reverie, as Coleridge and De Quincey call it) is therefore greatly appreciated.”
Although I might quibble with this or that, I find myself agreeing in the main with Dalrymple/Daniels despite our being at opposite ends of the contemporary political spectrum. Importantly, however, my explanatory causal chain would commence with capitalism, its foremost ideologies, as well as its more elusive yet widely diffused cultural ethos.
* * *
Whatever one thinks of Alcoholics Anonymous (AA), it clearly has a skeletal Christian structure (involving, first and foremost, reference to God or a ‘Higher Power’) although it is officially non-denominational (atheists and agnostics have sometimes formed their own AA groups). Its members often become fervent believers in and missionaries for its methods. It is quite interesting and revealing that “AA sprang from the Oxford Group, a non-denominational, altruistic movement modeled after first-century Christianity. Some members founded the group to help in maintaining sobriety. ‘Grouper’ and former drinking buddy Ebby Thacher approached Wilson, saying that he had ‘got religion,’ was sober, and that Wilson could do the same if he set aside objections and instead formed a personal idea of God, ‘another power’ or ‘higher power.’” The Christianity of the Oxford Group—or Buchmanites—was most evident in its revival of the notion of a “public and detailed confession, regarded as the first step toward a sincere conversion....” (of Protestant provenance).4
Whether intentionally or not, AA popularized the “disease” concept of alcoholism, a term that can be understood variously, on the order of a biomedical affliction or in more metaphorical terms, with AA members preferring the biomedical interpretation (a conclusion based on anecdotal and other evidence), now depicted as an “addictive disease.” Efforts to assess its effectiveness have been quite difficult if not controversial, yet it appears safe to conclude that AA is “as effective as other abstinence-based support groups.” As Fingarette wrote in Heavy Drinking, “the addict has a problem, but it is not a medical one: he does not know how to live. And on this subject the doctor has nothing, qua doctor, to offer.” But whatever its dispositional commitment to a “disease model” of addictive behavior (discussed at some length in Fingarette’s book), AA does provide its members with an attenuated Christian worldview which addresses some of the “how to live” questions. It is thus not surprising that most of its members reside in North America (although there are members in well over a hundred countries around the world). Intriguingly, the well-known confessional component of AA is entwined with several therapeutic features of AA’s collective self-help group psychological dynamics, including the “instillation of hope, imparting relevant information, group cohesiveness, and catharsis.”
In a fairly decent research article (as such things go) on psychological concepts and therapeutic factors in AA aimed at those involved in different forms of therapy and counseling,5 one of the therapeutic modalities discussed is Alfred Adler’s (somewhat misleadingly titled) “individual psychology” model of psychotherapy. Mention is made of AA members putting faith in something beyond themselves, acknowledging that some members will not be comfortable with “God-talk”: “For those who have had their struggles with organized religion [Is that redundant? Is not religion by definition more or less ‘organized’?] or concepts of God, it might be suggested that ‘God’ means [or can be replaced by] ‘good orderly direction,’ or ‘group of drunks’ [communal fellowship, if you will]. If a newly sober member has no interest in God, or the concept of one [in other words, they are agnostic or atheistic], it may be suggested to simply make the group their Higher Power.” From the perspective of psychoanalytic group psychology (be it Freudian or Kleinian), that does not seem a healthy alternative, even if communal fellowship or simply a sense of community among members is a valuable desideratum (as it was with Adler).
It has been said that today AA does not require a belief in God or even a “higher power”: “The only requirement for membership is a desire to stop drinking.” While welcome or inviting as stated, this is questionable outside of groups whose members are avowedly agnostics or atheists, for while it may not be a de jure requirement, it seems by most accounts to be a de facto requirement. And so one thing I found (thus far) troubling or simply puzzling was an example used by our authors to illustrate the putative fact that religious faith or spiritual belief is not necessary for membership:
“On numerous occasions, one of the three authors has participated in the Mustard Seed AA Meeting in Chicago, where newcomers are given a note card with a mustard seed taped to it and advised it only takes that much faith to begin. In accordance with AA’s anonymity, the specific author is not identified.”
This is a somewhat disingenuous ritual or practice that hardly seems innocent or innocuous with respect to religion, in particular, Christianity! Surely our authors have heard of the “mustard seed parable” in the Gospels (often paired with ‘the leaven’ parable) of the New Testament:
Cf. the Gospel of Thomas (also known as the Coptic Gospel of Thomas), an extra-canonical “sayings” gospel, which is the most pellucid with regard to meaning (Thomas 20): “The followers said to Jesus, ‘Tell us what heaven’s kingdom is like.’ He said to them, ‘It is like a mustard seed, it is the smallest of all seeds, but when it falls on prepared soil, it produces a large plant and becomes a shelter for birds of heaven.’”
Hermeneutically, theologically, and doctrinally speaking, the authenticity of this parable “is not in question,” writes Anna Wierzbicka in What Did Jesus Mean? (Oxford University Press, 2001). Alongside the parable of the leaven, “nobody doubts that both these short and rather cryptic parables originated with Jesus. Furthermore, nobody doubts that they occupy an important place in Jesus’ teaching and that in fact they contain ‘one of the central elements of the preaching of Jesus.’”
The parable of the mustard seed contains a number of thematic images and metaphors: “the hiddenness of the seed (proverbially the smallest of all seeds); the inevitability of its growing (once planted); the consequent certainty of the outcome; the mysteriousness of this growth and its lack of dependence on human effort[!]; the amazing transformation of a tiny seed into a large shrub; the shift from something hidden and imperceptible to something visible, tangible, and indeed spectacular.”
In conclusion, while there is nothing unsettling, be it religiously, spiritually, or psychologically speaking, about this parable from the viewpoint of a Christian worldview or lifeworld, it decidedly is not a perspicuous illustration of AA’s refusal to require belief in God or a faith in a Higher Power sans the influence or presence of Christian faith or sentiment! In fact, the note card with the mustard seed attached to it would seem to be part of preparing the soil for planting the seed, and thus the Christian hopes that follow therefrom! Should a prospective or new member ask a veteran why a mustard seed, or what the meaning of this mustard seed note is, it hardly seems possible that any forthcoming explanation will be bereft of some sort of Christian take on things along the lines of our introduction to the parable above.
I end on a personal note: my few encounters with AA members have not been encouraging (I’ll spare you the details), and my best friend from high school attended meetings for some time but eventually relapsed. But my experiences are idiosyncratic and anecdotal. If lives have been saved and members have found some relief from what ails them through AA, I can hardly begrudge them, for they have found something that “works,” at least in salient respects, for them. For a more objective assessment, see the conclusion to the above article on the history of AA’s work with alcohol addiction, one that credits AA with “revolutioniz[ing] humanity’s understanding and perspective on addiction,” noting the “millions of individuals who have benefitted from the therapeutic elements in the support group model of counseling theory.” Finally, “AA has both relied upon, and established, numerous and powerful therapeutic factors that have exceeded its inception … [and whatever the origins of these therapeutic factors] the contributions of the AA model on the sobriety and mental health of its members cannot be underestimated.”
Notes
1. Here are three contemporary examples of secular Natural Law theorizing that remain unavoidably metaphysical: The first is from Larry May in the context of his larger moral and legal treatment of international criminal law (especially jus cogens norms) and the “crimes against humanity” in particular. May proffers a “morally minimalist” conception of natural law that is beholden to H.L.A. Hart’s belief in what Hart called “a minimum content of natural law” as well as Thomas Hobbes’s moral and political philosophy in his book, Crimes Against Humanity: A Normative Account (Cambridge University Press, 2005 [see the index entry on ‘natural law’]). By way of making his brief argument eminently plausible, please see the discussion of same in S.A. (Sharon) Lloyd’s brilliant analysis, Morality in the Philosophy of Thomas Hobbes: Cases in the Law of Nature (Cambridge University Press, 2009). Second, May has essentially turned this into a book-length argument in Limiting Leviathan: Hobbes on Law and International Affairs (Oxford University Press, 2013). Finally, see the essay, “The Open Texture of Natural Law,” by Raghavan Iyer from his book, Parapolitics: Toward the City of Man (Oxford University Press, 1979): 50-60.
2. Please see the entry on “li” in my study guide for Confucianism. The term can embrace notions of ritual, rites, etiquette, customs, conventions, social norms, and propriety. Culturally and psychologically, involving both formal and informal methods of education in a society, li has everything to do with what the philosopher and writer Iris Murdoch once characterized as the “proper directing of our modes of attention.”
3. Please see Amartya Sen’s discussion of “relative poverty” in the chapter, “Conceptualizing and Measuring Poverty,” in David B. Grusky and Ravi Kanbur, eds. Poverty and Inequality (Stanford University Press, 2006): 30-46.
4. Stanley W. Jackson, Care of the Psyche: A History of Psychological Healing (Yale University Press, 1999): 144. Jackson devotes an entire chapter in his book to doctrines and practices of “Confession and Confiding.”
5. David A. Stone, John A. Contch, and Joshua D. Francis, “Therapeutic Factors and Psychological Concepts in Alcoholics Anonymous,” Journal of Counselor Practice 8(2): 120-135, 2017.
Recommended Reading (additional relevant titles are found in the three bibliographies below)
Relevant Bibliographies
Posted at 06:32 PM in Patrick S. O'Donnell | Permalink | Comments (0)
Plenty Coups (Crow: Alaxchíia Ahú, ‘many achievements;’ 1848 – 1932) was the principal chief of the Crow Nation (‘Apsáalooke’) and a visionary leader.*
Wise people, one might argue, possess epistemic self-confidence, yet lack epistemic arrogance. Wise people tend to acknowledge their fallibility, and wise people are reflective, introspective, and tolerant of uncertainty. Any acceptable theory of wisdom ought to be compatible with such traits.
[Most theories of wisdom] ... require a wise person to have knowledge of some sort. All of these views very clearly distinguish knowledge from expertise on a particular subject. Moreover, all of these views maintain that wise people know ‘what is important.’ The views differ, for the most part, over what it is important for a wise person to know, and on whether there is any behavior, action, or way of living, that is required for wisdom.
Wisdom is not just one type of knowledge, but diverse. What a wise person needs to know and understand constitutes a varied list: the most important goals and values of life – the ultimate goal, if there is one; what means will reach these goals without too great a cost; what kinds of dangers threaten the achieving of these goals; how to recognize and avoid or minimize these dangers; what different types of human beings are like in their actions and motives (as this presents dangers or opportunities); what is not possible or feasible to achieve (or avoid); how to tell what is appropriate when; knowing when certain goals are sufficiently achieved; what limitations are unavoidable and how to accept them; how to improve oneself and one's relationships with others or society; knowing what the true and unapparent value of various things is; when to take a long-term view; knowing the variety and obduracy of facts, institutions, and human nature; understanding what one's real motives are; how to cope and deal with the major tragedies and dilemmas of life, and with the major good things too. — Robert Nozick
* * *
“Wisdom doesn’t require method,” said one of my wiser friends in an online post. In the sense that our acquisition of knowledge seems to require various kinds of method that act as means to the ends of knowing, this helps distinguish knowledge (and information or data as well) from wisdom. Yet I am ambivalent about the truth of this claim, one reason hinging on precisely what “method” is intended to mean here. In addition, and conceptually speaking, there are several different characterizations of wisdom which invoke somewhat different properties or qualities for distinguishing wisdom from mere knowledge (and we have the idea that a person, like Socrates, is wise because he disavows any claims to wisdom while nonetheless exemplifying its properties; this may perhaps mean only that epistemic humility is one quality of the wise person, who is aware, unlike the rest of us, of the extent of his or her ignorance). If we look at people we consider wise, especially, for example, those in religious and philosophical traditions around the world, there existed more or less similar methods (generally speaking, ‘ascetic’ exercises or practices [from the Greek áskēsis]) which appear to have been essential to their individuation, self-actualization, or self-realization (used here rather broadly so as not to imply any one particular religious, psychological, or philosophical tradition or school of thought), and which led eventually to wisdom. By this I mean what the philosopher John Cottingham calls “spiritual exercises,” which overlap with what has been termed “therapeia” or simply therapy in both philosophical and religious worldviews (the latter being those in which philosophy plays a prominent part in the tradition, as we see in some forms of Indic philosophy, be it from Hinduism or Buddhism, in Daoism and Confucianism, and in early Islam and Sufism more generally).
These methods of course do not guarantee one will arrive at wisdom, thus we might say they are necessary but not sufficient conditions for same. Those who do not practice spiritual exercises or philosophical therapy may, on occasion, be wise or display wisdom (such as is captured in aphorisms, maxims, and proverbs), but I believe these to be exceptions to the rule. In ancient Greek philosophy there are indeed “methods” of this kind (which often exemplify the ability to avoid the folly of ‘willing what cannot be willed’), hence the title of John M. Cooper’s book, Pursuits of Wisdom: Six Ways of Life in Ancient Philosophy from Socrates to Plotinus (Princeton University Press, 2012). Thus we speak of individuals exemplifying wisdom (indeed, narratives of such persons are not hard to find in religious and non-religious or philosophical worldviews around the planet). Having said all of this, I suspect the most compelling way we identify embodiments or incarnations of wisdom in the first instance is through what is known as a “direct reference” theory of the Good (and wise!), wherein we anchor our moral concepts and epistemic virtues in actual exemplars (although fictional characters might be used here as well); thus we say that a wise or good person is like that, referring to the character, life, or actions of a particular individual (see the Zagzebski title below). What makes for wisdom may differ in particulars across worldviews and traditions, but it would appear that, at least in principle, cross-cultural recognition of wisdom is still possible, there being a strong family resemblance here, although such recognition of course presumes prior knowledge of the respective societies or cultures and their worldviews. At the very least, it appears that wisdom is more often than not linked to some conception of what it means to live well, to live in the light of the Good. As for wisdom’s possible intrinsic ties to eudaimonia and happiness, it would seem much depends on how we fill out these concepts, although we might be justified in seeing these states as by-products or incidental benefits of being wise, thus not necessarily as constitutive of wisdom as such.
* On Plenty Coups, please see the Jonathan Lear title below. One might argue that Plenty Coups was wise both before (embodying the ideals of the Crow way of life or culture and especially the ideal of a Crow chief) and after (symbolized by Plenty Coups leaving his war bonnet and coup stick on the sarcophagus at the 1921 ceremonial burial of the Unknown Soldier in Washington) the traditional martial and nomadic Crow way of life ended. I trust that by the end of Lear’s book one will learn why and how Plenty Coups exemplified wisdom (and what today we term transformational leadership) “after the buffalo went away” and the Crow people were confined to a reservation. As for the spiritual and moral psychological “methods” practiced by Plenty Coups, these would include his training as a young warrior as well as the practice of “dream-vision” and interpretation:
“What is striking about Plenty Coups’s dream—and the interpretation the tribe gave to it—is that it was used not merely to predict a future event; it was used by the tribe to struggle with the intelligibility of events that lay on the horizon of their ability to understand. Dreams were regularly used by the Crow to predict the future. People would, for instance, wait for a vision in a dream to tell them it was a propitious time to go into battle. But young Plenty Coups’s dream was of a different order. It did not predict any particular event, but the change of the world order. It was prophetic in the sense that the tribe used it to face up to a radically different future.” — Jonathan Lear
Recommended (in addition to the Cooper title above)
“I hear the white men say there will be no more war. But this cannot be true. There will be other wars. Men have not changed, and whenever they quarrel they will fight, as they have always done.” — Plenty Coups
Posted at 04:39 PM in Patrick S. O'Donnell | Permalink | Comments (0)
A Facebook post (posted here earlier today) by my dear friend Steve Shiffrin moved me to comment on one aspect of his post, to which is added a preliminary attempt to explain this phenomenon in group psychological terms that draw from the wellspring of psychoanalysis.
Replacement “theory” is not really a theory, and we accord it sociological and political (or simply scientific) plausibility if not credibility should we use this term indiscriminately or uncritically. It is part of a rigid if not closed ideological belief system that is racist and implicitly or explicitly cleaves to notions of white supremacy. This idea is part myth, part phantasy (with a history: according to a recent op-ed in the LA Times by Jason Stanley and Federico Finchelstein, white replacement phantasy and ‘its ideological predecessors have been central to fascist movements in Europe, Asia, the United States and elsewhere’) and invites if not imbibes both illusions and delusions. In Freudian terms, it is born of primary mental functioning and processes (hence the disregard of logic and fundamental forms of reasoning, the tolerating of contradictions if not incoherence, the use of densely symbolic imagery and indirect and distorted forms of representation, etc.), rather than secondary process psychic functioning, which respects grammatical and syntactical rules and denotative symbols, giving pride of place to reasoning and consistent links between belief and desire (in general, the ego predominates). We need, I believe, to examine this ideological myth in psychological terms, in particular as it relates to group psychology and forms of collective belief and identity, including nationalism (in this instance, xenophobic forms of same based on mythic narratives). Trump’s presidency and ongoing political celebrity out of office have tapped into the darker forces of an American psyche, if you will, forces that are regressive in the worst sense (i.e., not in service of the ego or one with sublimation, as we find, say, in artistic creativity). In the words of Thomas A. Singer,
“Donald Trump uncovered a huge sinkhole of dark, raw emotions in the national psyche for all of us to see [heretofore they did not find sufficient legitimation or encouragement among powerful politicians but existed on the margins of society as well as being, on the level of the individual, more or less ‘repressed,’ albeit with occasional exceptions in the form of crime sprees, acts of terrorism, mass shootings, flagrant violations of social norms, etc.]. Rage, hatred, envy, and fear surfaced in a forgotten, despairing white underclass [to be honest, it was not just among this group, but in the precarious lower and middle classes as well] who had little reason to believe that the future would hold the promise of a brighter life-affirming purpose. Trump tapped into the negative feelings many Americans have about the things [presumably] that we are supposed to be compassionate [or at minimum, tolerant] about—ethnic, racial, gender, [sexual], and religious differences. What a relief, so many must have thought, to hear a politician speak their unspoken [at least in polite society, as we say] resentments and express their rage.”
More specifically if not etiologically, in the words of Elizabeth Mika, Trump’s malignant narcissism “is the main attraction for his followers, who project their hopes and dreams onto him.” If we consider Trump a would-be tyrant, “[t]hrough the process of identification, the tyrant’s followers absorb his omnipotence and glory and imagine themselves as powerful as he is [this is a vicarious or substitute expression of his power, which some will imagine themselves to actually possess, hence the ‘acting out’ we have witnessed in recent mass shootings by young white men], the winners in the game of life [if whites become a minority, they will see themselves as ‘losers’]. This identification [is a temporary salve for] the follower’s narcissistic wounds … [which] tends to shut down their [ability to] reason and [their exercise of] conscience, allowing them to engage in immoral and criminal behaviors with a sense of impunity engendered by this identification. Without the support of his narcissistic followers, who see in the tyrant a reflection and vindication of their long-nursed dreams [nightmares for the rest of us] of glory, the tyrant would remain a middling nobody.” This is not revelatory, psychoanalytically speaking, as Erich Fromm earlier wrote of such phenomena in several books, including Escape from Freedom (1941) and The Heart of Man: Its Genius for Good and Evil (1964).
White rage, envy, and resentment can be characterized in terms of what the sociologist Michael Kimmel terms “aggrieved entitlement,” that is, a “narcissistic mixture of elevated expectations, resentments, and desire for revenge on specified targets and/or society in general for not meeting those expectations” [see his 2013 book, Angry White Men: American Masculinity at the End of an Era]. Existing structures and exercises of white power and privilege constitute one such set of “elevated expectations,” insofar as they are thought to be deserved or justified and thus not open to questioning or criticism.
Narcissistic collusion between Trump (or any would-be tyrant or Trump-wannabe) and his followers typically involves “good-sounding” as well as “openly unrealistic, bordering on delusional—promises to his supporters” that he usually has neither the intention nor even the ability to fulfill. “He holds his supporters in contempt, as he does ‘weaker’ human beings in general, and uses them only as props in his domination- and adulation-oriented schemes.” As we saw during and after Trump’s presidency, the “narcissistic collusion between the [would-be] tyrant and his supporters is also driven by the latter’s need for revenge, for the tyrant is always chosen to perform this psychically restorative function: to avenge the humiliations (narcissistic wounds) of his followers and punish those who afflicted them.” The target of such vengeance is based on what Freud brilliantly defined as the “narcissism of small differences,” meaning “the Other” is anyone who does not share the ideological worldview of white supremacy. This is where, notoriously, scapegoating (displacement and projection) enters the picture, as Trump’s supporters act out their narcissistic revenge in one way or another:
“The tyrant [‘would-be,’ in Trump’s case] and his followers typically choose as vessels for their negative projections and aggression the members of society who are not just different but weaker. The tyrant fuels the aggression [and violence] in order to solidify his power (while simultaneously increasing the vicarious power of his followers) but also to deflect it from himself, shield his own narcissism, and repair his own narcissistic injuries dating to his childhood.” Further psychological analysis speaks to the tyrant as a father-protector and messianic figure (in which case the theology and symbolism of Christianity becomes relevant).
Two books I have found quite helpful by way of providing a group psychological explanation largely within the parameters of psychoanalytic theory:
Relevant Bibliographies
Posted at 07:00 AM in Patrick S. O'Donnell | Permalink | Comments (0)
Most Americans prefer the myopic life found aboard the Titanic, that is, the psychological states of malignant narcissism and present hedonism, to the imaginative utopian foresight, difficult planning and arduous work necessary to rebuild Noah’s Ark, which assumes an appreciation of the capacities and powers made available to us through individual and collective “self-binding” and constraints (Jon Elster, among others). Perfervid U.S./MAGA nationalist identity is irrationally yet inextricably intertwined with the developmentally arrested qualities of a cultural adolescence (Werner Sombart*) stuck in the muck and mire of a hyper-technological and high finance capitalist ethos that is increasingly susceptible to moments of violent regression steeped in self-deception, states of denial and the hallucinatory phantasies of illusion and delusion. Which brings us to the chapter title, “Who Will Build the Ark?,” in Mike Davis’s book, Old Gods, New Enigmas: Marx’s Lost Theory (Verso, 2018), a small snippet from which I quote:
“Scholarly research has come late in the day to confront the synergistic possibilities of peak population growth, agricultural collapse, abrupt climate change, peak oil, and in some regions, peak water, and the accumulated penalties of urban neglect. If investigations by the German government, Pentagon, and CIA into the national-security implications of a multiply determined world crisis in the coming decades have had a Hollywoodish ring, it is hardly surprising. As a 2007/2008 UN Human Development Report observed, ‘There are no obvious historical analogies for the urgency of the climate change problem.’ While paleoclimatology can help scientists anticipate the non-linear physics of warming Earth, there is no historical precedent or vantage point for understanding what will happen in the 2050s, when a peak species population of 9 to 11 billion struggles to adapt to climate chaos and depleted fossil energy. Almost any scenario, from the collapse of civilization to a new golden age of fusion power, can be projected onto the strange screen of our grandchildren’s future.”
For Davis’s visionary guide to urgent political action, in effect, answering the question posed in the aforementioned chapter title, please read the book!
* As summarized here by the late Raghavan Iyer (and in important respects updated in the works of Christopher Hedges as well as psychoanalysts on the Left with anthropological and sociological sensibilities), this entails the dispositionally constituted
“… tendency to mistake bigness for greatness; the influence on the inner workings of the mind of the quantitative valuation of things, the connection between success, competition [in which there is a meritocratic conception of ‘winners’ and ‘losers,’] and sheer size; the tendency to regard the speediest achievements [be they in hyper-technology or shameless high finance shenanigans] as the most valuable ones; the connection between megalomania, mad hurry, and record-breaking [in conventional mass media or more ubiquitously the virtual reality of smart phones and social media]; the attraction of novelty [which psychologically and sociologically suffuses the wider culture]; the habit of hyperbole [this habit has degenerated into a penchant for utter disregard for what is or might be true and the systematic spread of the crassest ideological propaganda and myths; as illusions, delusions, and phantasies make for what Fromm called the ‘pathology of normalcy’ and Hedges calls the ‘triumph of spectacle’]; the love of sensationalism [be it in newspapers, or on TV, computer screens, and smartphones]; the fashion in ideas [consider the many American academic intellectuals who fawn over French intellectual fads] as well as clothes; and the consciousness of superiority [on all fronts, but especially the economic, political, and military] that is [in the end or all things considered] merely an expression of weakness [or fear, insecurity, anxiety, and so forth].” — Raghavan Iyer, Parapolitics: Toward the City of Man (Oxford University Press, 1979): 307.
Some Relevant Bibliographies (embedded links)
Posted at 12:21 PM in Patrick S. O'Donnell | Permalink | Comments (0)
It appears that Tariq Ali’s book, Churchill: His Times, His Crimes (Verso, 2022), is doing us the same invaluable public service that Christopher Hitchens, Greg Grandin, and William Shawcross (and some others) earlier performed with their works on Kissinger.* Of course the Right is apoplectic when its idealized and mythic portraits are brought back to earth, when people learn these men were morally and legally responsible for barbaric, inhumane and cruel policies, decisions and actions during times of both war and putative peace, as millions needlessly suffered and died under their imperious leadership, and as they became political exemplars of some of the worst character vices of heart and mind.
* Please see:
Posted at 11:43 AM in Patrick S. O'Donnell | Permalink | Comments (0)
First, I’d like to draw your attention to Just Security’s post on a “Model Indictment for the Crime of Aggression Committed against Ukraine.” Alas, this would be even more powerful had the actions of state leaders, for example, in the U.S., the UK, France, Israel, Syria, and Saudi Arabia, led to charges for this crime (i.e., the crime of aggression) and/or related war crimes at various times and places beginning with WWII (a just war on the Allied side did not excuse the commission of war crimes, violations of the Geneva Conventions, etc.). But because two wrongs do not make a right, the crimes falling under the aegis of international criminal law are no less pellucid in this instance. We hope and pray for the day when international criminal law is not reducible to “victors’ justice” (Danilo Zolo, among others). We need a more parliamentary or democratic model of the U.N., one in which the most powerful nations are not allowed to dominate (as is the case at present with the current Security Council model). I suppose it is fair to say that international criminal law is still, to a significant extent, a utopian endeavor, that is, if we believe in the moral and legal value of criminal law (a subject for another day). It is “aspirational” or utopian in large measure owing to “victors’ justice,” to U.S. hypocrisy on war crimes, and to the cynical, rhetorical, and crass political reduction of international criminal law violations to politicized propaganda, as detailed in a NYRB essay by Fintan O’Toole (more from which follows below), all of which renders diplomacy and a negotiated end to the war in Ukraine that much more difficult if not unlikely:
“It is hard to overstate how important it is that the war crimes that have undoubtedly been committed already in Ukraine—and the ones that are grimly certain to be inflicted on innocent people in the coming weeks and months—not be understood as ‘a flexible instrument in the hands of politicians.’ They must not be either shaped around or held hostage by ‘policy and expediency.’ This is a question of justice. Those who have been murdered, tortured, and raped matter as individuals, not as mere exemplars of Putin’s barbarity. The desire to prosecute their killers and abusers stems from the imperative to honor that individuality, to restore insofar as is possible the dignity that was stolen by violence.
But it is also, as it happens, a question of effectiveness. If accusations of Russian war crimes are seen to be instrumental rather than principled, they will dissolve into ‘whataboutism:’ Yes, Putin is terrible, but what about… Instead of seeing a clean distinction between the Western democracies and Russia, much of the world will take refuge in a comfortable relativism. If war crimes are not universal violations, they are merely fingers that can point only in one direction—at whomever we happen to be in conflict with right now. And never, of course, at ourselves.
Even before Putin launched his invasion on February 24, the Biden administration seems to have had a plan to use Russian atrocities as a rallying cry for the democratic world. That day, The New York Times reported that ‘administration officials are considering how to continue the information war with Russia, highlight potential war crimes and push back on Moscow’s propaganda.’ This was not necessarily cynical—Putin’s appalling record of violence against civilians in Chechnya and Syria and plain contempt for international law made it all too likely that his forces would commit such crimes in Ukraine.
But this anticipation of atrocities, and deliberation about how to make use of them, underlines the administration’s perception of the accusation of war crimes as a promising front in the ideological counterattack against Putin. As early as March 10, well before the uncovering of the atrocities at Bucha, the US ambassador to the United Nations, Linda Thomas-Greenfield, told the BBC that Russian actions in Ukraine ‘constitute war crimes; there are attacks on civilians that cannot be justified … in any way whatsoever.’
A week later, and still a fortnight before the first reports from Bucha, Biden was calling Putin, in unscripted remarks, a ‘war criminal.’ At that point, he in fact seemed a little unsure about the wisdom of making the charge—initially, when asked if he would use the term, he replied ‘no,’ before asking reporters to repeat the question and then replying in the affirmative. Significantly, Biden was responding not to ground-level assaults by Russian troops on civilians but to the shelling of Ukrainian cities. This may perhaps explain his hesitancy: civilian casualties from aerial assaults by drones, rockets, and bombs are a sore subject in recent US military history. [emphasis added] Having crossed the line and made this charge directly, Biden had little choice but to raise the stakes when the terrible images from Bucha were circulated. First, on April 4 he went beyond deeming Putin a criminal by calling specifically for him to face a ‘war crime trial.’ Then on April 12 he pressed the nuclear button of atrocity accusations: genocide. ‘We’ll let the lawyers decide, internationally, whether or not it qualifies [as genocide], but it sure seems that way to me.’ He also referred to an unfolding ‘genocide half a world away,’ clearly meaning in Ukraine.
Biden did so even though his national security adviser, Jake Sullivan, had told a press briefing on April 4: ‘Based on what we have seen so far, we have seen atrocities, we have seen war crimes. We have not yet seen a level of systematic deprivation of life of the Ukrainian people to rise to the level of genocide.’ Sullivan stressed that the determination that genocide had been committed required a long process of evidence-gathering. He cited the recently announced ruling by the State Department that assaults on the Rohingyas by the military in Myanmar/Burma constituted genocide. That conclusion was reached in March 2022; the atrocities were committed in 2016 and 2017. The State Department emphasized in its announcement that it followed ‘a rigorous factual and legal analysis.’
It is obvious that no such analysis preceded Biden’s decision to accuse Putin of genocide. When asked about genocide on April 22, a spokesperson for the UN High Commissioner for Human Rights said, ‘No, we have not documented patterns that could amount to that.’ Biden’s careless use of the term is all the more damaging because, however inadvertently, it echoes Putin’s grotesque claim that Ukraine has been committing genocide against Russian speakers in Donbas.
The problem with all of this is not that Biden is wrong but that it distracts from the ways in which he is right. This overstatement makes it far too easy for those who wish to ignore or justify what the Russians are doing to dismiss the mounting evidence of terrible crimes in Ukraine as exaggerated or as just another battleground in the information war. In appearing overanxious to inject ‘war criminal’ into the international discourse about Putin and making it seem like a predetermined narrative, the US risked undermining the very stark evidence for that conclusion. By inflating that charge into genocide, it substituted rhetoric for rigor and effectively made it impossible for the US to endorse any negotiated settlement for Ukraine that leaves Putin in power: How can you make peace with a perpetrator of genocide? Paradoxically, it also risked the minimization of the actual atrocities: If they do not rise to the level of the ultimate evil, are they ‘merely’ war crimes?
What makes these mistakes by Biden truly detrimental, however, is that the moral standing of the US on war crimes is already so profoundly compromised. The test for anyone insisting on the application of a set of rules is whether they apply those rules to themselves. It matters deeply to the struggle against Putin that the US face its record of having consistently failed to do this.”
Dia Azzawi’s “Sabra Shatila,” created in response to the 1982 massacre of civilians in Palestinian refugee camps.
That out of the way, I’d like to share the heart and soul of O’Toole’s extremely important essay, at least for those of us who believe in the moral and legal value of international criminal law and justice.
“Our Hypocrisy on War Crimes,” The New York Review of Books, May 26, 2022
By Fintan O’Toole
[….] “… [T]he US has been, for far too long, fatally ambivalent about war crimes. Its own history of moral evasiveness threatens to make the accusation that Putin and his forces have committed them systematically in Ukraine seem more like a useful weapon against an enemy than an assertion of universal principle. It also undermines the very institution that might eventually bring Putin and his subordinates to justice: the International Criminal Court (ICC).
There have long been two ways of thinking about the prosecution of war crimes. One is that it is a universal duty. Since human beings have equal rights, violations of those rights must be prosecuted regardless of the nationality or political persuasion of the perpetrators. The other is that the right to identify individuals as war criminals and punish them for their deeds is really just one of the spoils of victory. It is the winner’s prerogative—a political choice rather than a moral imperative.
Even during World War II, and in the midst of a learned discussion about what to do with the Nazi leadership after the war, the American Society of International Law heard from Charles Warren, a former US assistant attorney general and a Pulitzer Prize–winning historian of the Supreme Court, that ‘the right to punish [war criminals] is not a right conferred upon victorious belligerents by international law, but it flows from the fact of victory.’ Warren quoted with approval another eminent American authority, James Wilford Garner, who had written that ‘it is simply a question of policy and expediency, to be exercised by the victorious belligerent or not.’ ‘In other words,’ Warren added, ‘the question is purely political and military; it should not be treated as a judicial one or as arising from international law.’ As the Polish lawyer Manfred Lachs, whose Jewish family had been murdered by the Nazis, wrote in 1945, this idea that the prosecution of war crimes is ‘a matter of political expediency’ would make international law ‘the servant of politics’ and ‘a flexible instrument in the hands of politicians.’ [….]
On November 19, 2005, in the Iraqi town of Haditha, members of the First Division of the US Marines massacred twenty-four Iraqi civilians, including women, children, and elderly people. After a roadside bomb killed one US soldier and badly injured two others, marines took five men from a taxi and executed them in the street. One marine sergeant, Sanick Dela Cruz, later testified that he urinated on one of the bodies. The marines then entered nearby houses and killed the occupants—nine men, three women, and seven children. Most of the victims were murdered by well-aimed shots fired at close range.
The official US press release then falsely claimed that fifteen of the civilians had been killed by the roadside bomb and that the marines and their Iraqi allies had also shot eight ‘insurgents’ who opened fire on them. These claims were shown to be lies four months later, when Tim McGirk published an investigation in Time magazine. When McGirk initially put the evidence—both video and eyewitness testimony—to the marines, he was told, ‘Well, we think this is all al-Qaeda propaganda.’ This was consistent with what seems to have been a coordinated cover-up. No one in the marines’ chain of command subsequently testified that there was any reason to suspect that a war crime had occurred. Lieutenant Colonel Jeffrey Chessani, the battalion commander, was later charged with dereliction of duty for failing to properly report and investigate the incident. Those charges were dismissed. Charges against six other marines were dropped, and a seventh was acquitted. Staff Sergeant Frank Wuterich, who led the squad that perpetrated the killings, was demoted in rank to private and lost pay, but served no time in prison. [….]
How does the ‘tragic incident’ at Haditha differ from the murders of civilians by Russian forces in Ukraine? There are some important distinctions. Unlike in Russia now, the US had media organizations sufficiently free and independent to be able to challenge the military’s account of what happened. It had elected politicians who were willing to condemn the atrocity—in 2006, for example, Joe Biden suggested that then defense secretary Donald Rumsfeld should resign because of Haditha. Senior military commanders, including Mattis, were obviously repelled by the atrocity. Putin ostentatiously decorated the Sixty-Fourth Separate Motorized Rifle Brigade for its ‘mass heroism and courage’ after that unit had been accused by Ukraine of committing war crimes in Bucha. There was no such official endorsement of the First Marine Division. These differences matter—false equivalence must be avoided.
Yet uncomfortable truths remain. One of the most prestigious arms of the US military carried out an atrocity in a country invaded by the US in a war of choice. No one in a position of authority did anything about it until Time reported on it. No one at any level of the chain of command, from senior leaders down to the soldiers who did the killings, was held accountable. And such minor punishments as were imposed seem to have had no deterrent effect. In March 2007 marines killed nineteen unarmed civilians and wounded fifty near Jalalabad, in Afghanistan, in an incident that, as The New York Times reported at the time, ‘bore some striking similarities to the Haditha killings.’ Again, none of the marines involved or their commanders received any serious punishment.
Perhaps most importantly, nothing that happened in these or other atrocities in Iraq or Afghanistan changed the way that deliberate acts of violence against foreign civilians are presented in official American discourse. The enemy commits war crimes and lies about them. We have ‘tragic incidents,’ ‘tragic mistakes,’ and, at the very worst, a loss of discipline. When bad things are done by American armed forces, they are entirely untypical and momentary responses to the terrible stresses of war. The conditioning that helps make them possible, the deep-seated instinct to cover them up, and the repeated failure to bring perpetrators to justice are not to be understood as systemic problems. Nowhere is American exceptionalism more evident or more troubling than in this compartmentalizing of military atrocities.
The only way to end this kind of double standard is to have a single, supranational criminal court to bring to justice those who violate the laws of war—whoever they are and whatever their alleged motives. This idea has been around since 1872, when it was proposed by Gustave Moynier, one of the founders of the International Committee of the Red Cross. It seemed finally to be taking shape in the aftermath of World War II and the Holocaust, when a statute for an international criminal court was drafted by a committee of the General Assembly of the UN. This effort was, however, stymied by the USSR and its allies. In the 1990s the combination of the end of the cold war and the hideous atrocities committed during the breakup of Yugoslavia and in Rwanda gave the proposal a renewed impetus. This led to the conference in Rome in June and July 1998, attended by 160 states and dozens of nongovernmental organizations, that finally adopted the charter for the ICC. This statute entered into force in July 2002, and the ICC began to function the following year.
Of the five permanent members of the UN Security Council, one (China) opposed the adoption of the ICC’s statute. Two (the United Kingdom and France) supported it and fully accepted its jurisdiction. That leaves two countries that ended up in precisely the same contradictory position: Russia and the US. Both signed the Rome statute—Russia in September 2000, the US three months later. And both then failed to ratify it. Putin, presumably because of international condemnation of war crimes being committed under his leadership in Chechnya, declined to submit it to the Duma in Moscow. George W. Bush effectively withdrew from the ICC in May 2002, following the US-led invasion of Afghanistan and his declaration that ‘our war against terror is only beginning.’
The US then began what Yves Beigbeder, an international lawyer who had served at the Nuremberg Trial in 1946, called ‘a virulent, worldwide campaign aimed at destroying the legitimacy of the Court, on the grounds of protecting US sovereignty and US nationals.’ Against the backdrop of the ‘war on terror,’ Congress approved the American Service-Members’ Protection Act (ASPA) of 2002, designed to insulate US military personnel (including private contractors) from ICC jurisdiction. The ASPA placed numerous restrictions on US interaction with the ICC, including the prohibition of military assistance to countries cooperating with the court. Also in 2002, the US sought (unsuccessfully) a UN Security Council resolution to permanently insulate all US troops and officials involved in UN missions from ICC jurisdiction. In late 2004 Congress approved the Nethercutt Amendment, prohibiting assistance funds, with limited exceptions, to any country that is a party to the Rome statute.
These attacks on the ICC culminated on September 2, 2020, when the Trump administration imposed sweeping sanctions on Fatou Bensouda, a former minister of justice in Gambia, who was then the ICC’s chief prosecutor, and Phakiso Mochochoko, a lawyer and diplomat from Lesotho, who heads a division of the court. The US acted under an executive order that declared their activities a ‘national emergency.’ The emergency was ‘the ICC’s efforts to investigate US personnel.’ Trump’s secretary of state, Mike Pompeo, denounced the ICC as ‘a thoroughly broken and corrupted institution.’ A year ago, the Biden administration lifted these sanctions against Bensouda and Mochochoko, saying they were ‘inappropriate and ineffective.’ But the US did not soften its underlying stance, which is that, as Biden’s secretary of state, Antony Blinken, put it,
‘we continue to disagree strongly with the ICC’s actions relating to the Afghanistan and Palestinian situations. We maintain our longstanding objection to the Court’s efforts to assert jurisdiction over personnel of non-States Parties such as the United States and Israel.’
In principle, this hostility to the ICC is rooted in the objection that the court is engaged in an intolerable effort to bind the US to a treaty it has not ratified—in effect, to subject the US to laws to which it has not consented. If this were true, it would indeed be an unacceptable and arbitrary state of affairs. But this alleged concern is groundless. The ICC does not claim any jurisdiction over states—it seeks to prosecute individuals.
This distinction was vital to the Nuremberg Tribunal, which stressed that ‘crimes against international law are committed by men, not by abstract entities, and only by punishing individuals who commit such crimes can the provisions of international law be enforced.’ Moreover, the US is already a party to the treaties that define the crimes the ICC is empowered to prosecute. The ICC follows the precedents and practices of international criminal tribunals that the US enthusiastically supported and participated in: the Nuremberg and Tokyo trials after World War II, and the courts established in the 1990s to prosecute those responsible for atrocities in Yugoslavia and Rwanda. If the ICC is illegitimate, so were those courts.
The brutal truth is that the US abandoned its commitment to the ICC not for reasons of legal principle but from the same motive that animated Putin. It was engaged in aggressive wars and did not want to risk the possibility that any of its military or political leaders would be prosecuted for crimes that might be committed in the course of fighting them. That expediency rather than principle was guiding US attitudes became completely clear in 2005. The US decided not to block a Security Council resolution referring atrocities in the Darfur region of Sudan to the ICC prosecutor. (It abstained on the motion.) It subsequently supported the prosecution at the ICC of Sudanese president Omar al-Bashir and the use by the Special Court of Sierra Leone of the ICC facilities in The Hague to try former Liberian president Charles Taylor for crimes committed in Sierra Leone.
This American support was welcome, but it has been almost as damaging to the ICC as the outright hostility of the US had been. It suggested that in the eyes of the US, the only real war crimes were those committed by Africans. To date, the thirty or so cases taken before the ICC all involve individuals from Central African Republic, Côte d’Ivoire, Sudan, Democratic Republic of the Congo, Kenya, Libya, Mali, or Uganda. This selectivity led the African Union to label the ICC a ‘neo-colonial court’ and to urge its members to withdraw their cooperation from its prosecutions. However false the charge, it is easy to see how credible it might seem when the US has alternately endorsed the legitimacy of the ICC in prosecuting Africans and called the same court corrupt and out of control when it explores the possibility of investigating war crimes committed by Americans. [….]
A yawning gap has opened between Biden’s grandiloquent rhetoric about Putin’s criminality on the one side and the deep reluctance of the US to lend its weight to the institution created by the international community to prosecute such transgressions of moral and legal order. It is a chasm in which all kinds of relativism and equivocation can lodge and grow. The longer the US practices evasion and prevarication, the easier it is for Putin to dismiss Western outrage as theatrical and hypocritical, and the more inclined other countries will be to cynicism.
It has been said repeatedly since February 24 that if the democracies are to defeat Putin, they must be prepared to sacrifice some of their comforts. Germany, for example, has to give up Russian natural gas. What the US must give up is the comfort of its exceptionalism on the question of war crimes. It cannot differentiate itself sufficiently from Putin’s tyranny until it accepts without reservation that the standards it applies to him also apply to itself. The way to do that is to join the ICC.”
Recommended Reading
Relevant Bibliographies freely available for viewing and download on my Academia page:
Posted at 06:57 AM in Patrick S. O'Donnell | Permalink | Comments (0)
I wrote an essay, “Toward Socialism,” fairly quickly, several years ago (2018). I wanted it to be succinct as such things go, believing I would later expand on its principal assumptions, premises, and arguments, but today I find myself content with it as it stands. It would be helpful, however, to have a separate piece on just what socialism entails (and might entail) for a Liberal welfare state such as ours (as in the work of David Schweickart and the late Erik Olin Wright), some of which may apply to other countries with more benevolent, generous, and egalitarian (which in effect means increased freedom) welfare state regimes, as well as to states around the world both more democratic and less democratic than the U.S. (increasingly, there are states significantly more democratic than the U.S., assessed by the various criteria used by those institutes and organizations ‘measuring’ such things).
The following bibliographies freely available on my Academia page contain reading and research material that one can consider background and context for the essay:
Posted at 07:42 AM in Patrick S. O'Donnell | Permalink | Comments (0)
“The Mòzǐ (墨子) is a foundational text in Chinese ethics, political theory, epistemology, logic, semantics, just war theory, economics, and science. One of the classical ‘various masters’ texts of the Chinese philosophical tradition, its systematic ethical and political theories constitute a milestone in the development of ancient philosophy, East or West. The Mohists were the first thinkers in the world to develop a consequentialist ethical theory, a brand of normative ethics that remains influential to this day. They may have been the earliest ethical thinkers to emphasize the notions of impartiality and equality. They were the first in the Chinese tradition to offer sustained, rigorous arguments for their views and the first to develop an explicit methodology of argumentation, based mainly on analogical judgment and reasoning, which influenced the rhetoric of all their contemporaries. Their writings deserve an English translation that allows readers to truly enter into and understand the Mohists’ world of thought.”—From the Introduction to Chris Fraser’s (Trans. with an Introduction and Notes) Mòzǐ—The Essential Mòzǐ: Ethical, Political, and Dialectical Writings (Oxford University Press, 2020).
See too in the Stanford Encyclopedia of Philosophy (SEP): Mohism.
Related Bibliographies:
Posted at 07:43 AM in Patrick S. O'Donnell | Permalink | Comments (0)
While the U.S. celebrates Labor Day (another telling instance of the ideological meaning of American ‘exceptionalism’) on the first Monday in September, May 1st is recognized around the world as a workers’ holiday, a global (or international) day of solidarity between workers of all nationalities. It was bound up with the struggle for the shorter workday, a demand of major political significance for the working class: “Eight hours for work—eight for rest—and eight for what we will.” Thus, in many parts of the world, today is the true “Labor Day.” The auspicious nature of this date goes back to celebratory spring festivals and is still an excuse for Morris dancing; in the words of Emma Goldman, “If I can’t dance, I won’t be part of your revolution!” Eric Hobsbawm writes that
“From the start the occasion attracted and absorbed ritual and symbolic elements, notably that of a quasi-religious or numinous celebration (‘Maifeier’), a holiday in both senses of the word. […] Red flags, the only universal symbols of the [socialist labor] movement, were present from the start, but so, in several countries, were flowers: the carnation in Austria, the red (paper) rose in Germany, sweet briar and poppy in France, and the may, symbol of renewal, increasingly infiltrated, and from the mid-1900s replaced, by the lily-of-the-valley, whose associations were unpolitical. Little is known about this language of flowers which, to judge by the May Day poems in socialist literature also, was spontaneously associated with the occasion. It certainly struck the key-note of May Day, a time of renewal, growth, hope and joy (we recall the girl with the flowering branch of may associated in popular memory with the 1891 May Day shootings at Fourmies). Equally, May Day played a major part in the development of the new socialist iconography of the 1890s in which, in spite of the expected emphasis on struggle, the note of hope, confidence and the approach of a brighter future—often expressed in metaphors of plant growth—prevailed.” — Eric Hobsbawm, “Mass-Producing Traditions: Europe, 1870-1914,” in Eric Hobsbawm and Terence Ranger, eds., The Invention of Tradition (Cambridge University Press, 1983)
“An ancient holiday marked by celebrations in praise of spring and by symbolic evocations of fertility, this day perhaps inevitably became the revolutionary holiday of the nineteenth-century workers’ movement. As in British artist Walter Crane’s famous May Day drawings (much reprinted in the U.S. Socialist press), the vision of socialism seemed to speak at once to the natural yearnings for emancipation from the winter season and from the wintery epoch of class society.
In May 1886 several hundred thousand American workers marched into international labor history when they demonstrated for the eight-hour day. An unusual and informal alliance between the fledgling AFL [American Federation of Labor], local assemblies of the Knights of Labor, and disparate tendencies within the anarchist movement ignited a pent-up demand for shorter working hours. The social and labor ferment that crested in 1885-86 also marked the maturing of the Knights of Labor into the first meaningful national labor organization in the United States. The leadership of the Knights, however, envisioned the eight-hour day as an educational, political, and evolutionary achievement rather than an agitational and revolutionary one. On the other hand, the infant AFL, soon to molt from the impotent Federation of Organized Trades and Labor Unions, tied its star to the militant eight-hour actions. The third grouping in the labor triad comprised that section of the anarchists, mainly European immigrants, who emphasized trade union work as a vehicle to social revolution.
The uneasy and unsettled coalition targeted May 1, 1886, as the day of industrial reckoning. In Boston, Milwaukee, New York City, Pittsburgh, and especially Chicago, tens of thousands of workers rallied and struck for ‘eight hours of work, eight hours of rest, eight hours for what we will.’ The nation’s newspapers warned that the spirit of the Paris Commune was loose in the land and pointed to specific personalities among the anarchists to prove the point. [….]
Whether the May day would have been a one-time workers’ holiday or ‘forever be remembered,’ in the words of Samuel Gompers, ‘as a second Declaration of Independence,’ is a moot point. The events of a few days later projected it into an international framework and seared the conscience of labor activists ever since. A rally by striking lumbermen near the scene of a labor conflict at the McCormick-Harvester works in suburban Chicago led to a clash with scabs at the famous farm-implement company. Chicago police, already seasoned in labor brutality, mortally wounded several demonstrators.
The Chicago anarchists, who only a few days earlier had organized the peaceful eight-hour parade locally, called for a protest demonstration against the killings. The following night, on May 4, a thousand rallied at Haymarket Square in the city. The mayor of Chicago listened warily in the crowd until a thunderstorm sent His Honor and most of the throng home for the evening. Inexplicably, a large contingent of police seemingly waited for the mayor’s departure to forcibly disperse the remaining 200 demonstrators. As the officers rallied into the depleted group, a bomb was thrown into their ranks. Dozens of policemen were injured and eventually seven died, although some may have perished from their comrades’ panicked shooting. That response led to widespread but undocumented wounding of many nameless protesters.
The media’s failed predictions of violent upheaval for the May Day rallies three days earlier were readily transferred to the ‘Haymarket Affair.’ The forces of law and order understood that the carnage at Haymarket, regardless of who threw the missile, could discredit the labor movement and eradicate its more radical European appendages through a nascent Red Scare. The ensuing show trial in Chicago blessed the miscegenation of May Day and the Haymarket bombing in the popular mind.
The concept of May Day had meanwhile spread rapidly to the international workers’ movement, one of American labor’s (and radicals’) most important innovations. In 1889 the International Socialist Congress in Paris, with full knowledge of the American precedent, designated May 1 as an eight-hour holiday for workers of the world.” [….] — Scott Molloy, in Mari Jo Buhle, Paul Buhle, and Dan Georgakas, eds. Encyclopedia of the American Left (Garland Publishing, 1990).
“The international May Day, which dates back to 1889, is perhaps the most ambitious of labour rituals. In some ways, it is a more ambitious and generalized version of the annual combined labour demonstration and festival which we have seen emerging for one highly specific group of workers and confined to single regions in the miners’ demonstrations and galas of two decades earlier. It shared with these the essential characteristic of being a regular public self-presentation of a class, an assertion of power, indeed in its invasion of the establishment’s social space, a symbolic conquest. But equally crucially, it was the assertion of class through an organized movement—union or party. It was the labour army’s annual trooping of colours—a political occasion unthinkable without the slogans, demands, speeches which, even among the self-contained pitmen, increasingly came to be made by national figures representing not the union but the movement as a whole. At the same time, since the class as such was involved, it was also like subsequent gatherings of the same kind—one thinks of the national festivals of L’Humanité in France or Unità in Italy, a family occasion and a popular festival—though one which, in spite of an ample supply of beer and skittles, prided itself on its demonstration of self-control. Just as the Durham miners in 1872 were proud to disappoint the respectable who trembled at the invasion of the black barbarians—we recall the white gloves of the marchers—so a few years ago the Neapolitans took pride in a rather more startling achievement. Nothing, they claimed, had been stolen and nobody cheated during the national festival of Unità, when it took in that notoriously ingenious and light-fingered city.
But the miners’ galas were planned as annual occasions and even at the first tentative one in Durham in 1871 three prizes were offered for the band contest and ‘liberal money prizes for various athletic sports.’ May Day was planned simply as a one-off simultaneous international demonstration for the Legal Eight Hour Day. How much of its force, like that of the red flag, was due to this sense of internationalism, we can only speculate, but certainly a good deal. Annual repetition was imposed on the parties and the International by public demand from the grassroots. Moreover, it was through public participation that a demonstration was turned into a holiday in both the ritual and the festive sense. Engels only came to refer to it as a Maifeier or celebration instead of a demonstration in 1893. On the contrary, the ideologically purer revolutionaries were actually suspicious of merrymaking as politically diversionary, and of folkloric practices as a concession to the spirit of superstition [this sort of puritanical ideological fervor persists in some quarters of the Left to this day]. They would have preferred more glum and militant protest marches [so much for prefigurative revolutionary praxis!]. Leaders with a better sense of the masses, like Adler, Vandervelde and Costa, were better tuned to the wavelength of the masses. As Costa said in 1893, ‘Catholics have Easter; henceforth workers will have their own Easter.’ [In other words, a ritual and festival shorn of the deleterious psychological and ideological effects that come in the narrative wake of vicarious substitutionary atonement which is an integral part of the Passion part of the Easter story.] The Italians, mobilizing a traditional and largely illiterate class, tended to be unusually sensitive to the force of symbol and ceremony. What is more, the specific demand of the original May Day soon dropped into the background. It increasingly turned into an annual assertion of class presence—most successfully so where, against the advice of cautious socialist and union leaders which prevailed in Britain and Germany, it underlined the presence by a symbolic assertion of the fundamental power of workers, the abstention from work by a one-day strike. In many Latin countries it came to be seen as a commemoration of martyrs—the ‘Chicago martyrs,’ and is still sometimes so regarded [the resurgence and transformation of Christian mythology?].
“The ritual element in the workers’ May Day—which was, as someone observed, even among radical and revolutionary anniversaries the only one associated exclusively with the proletariat—was immediately recognized by the artists, journalists, poets and versifiers who, on behalf of their parties, produced badges, flags, posters, May Day periodicals, cartoons and other suitable material for the occasion. Their iconographic language echoes the imagery of spring, youth and growth which was spontaneously associated with the day. Flowers were an important part of this imagery and immediately came to be worn, we hardly know how: the carnation in Austria and Italy—it eventually became the flower of May Day—the red (paper) rose in Germany, sweet briar and poppy in France, as well as the may; but not the lily-of-the-valley which later came into non-political symbiosis with May Day in France.” — Eric Hobsbawm, Workers: Worlds of Labor (Pantheon Books, 1984): 76-78
A short list of suggested reading for May Day:
Sundry Reflections in Honor of May Day (International Workers’ Day)
“Once again the time has come to take Marx seriously.”—Eric Hobsbawm
“In the Marxist tradition, self-realisation is the full and free actualisation and externalisation of the powers and the abilities of the individual. [….] Under suitable conditions, both [political democracy and economic democracy] can be arenas for joint self-realisation.”—Jon Elster
“We have gone so far as to divorce work from culture, and to think of culture as something to be acquired in hours of leisure; but there can only be a hothouse and unreal culture where work itself is not its means; if culture does not show itself in all we make we are not cultured. [….] Industry without art is brutality.”—Ananda K. Coomaraswamy
Eleven Criticisms of Capitalism
—From Erik Olin Wright’s Envisioning Real Utopias (Verso, 2010)
“I have argued that Economic Democracy, as a system, will be less alienating than Laissez Faire. To summarize the reasons: Workers will have more participatory autonomy under Economic Democracy, because the degree of workplace democracy will not be restricted by the capitalists’ need to keep open all options for profit. The labor-leisure trade-off should be more in accordance with the general interest under Economic Democracy, because workers will have a greater interest in promoting more flexible, less frantic, more meaningful working arrangements, as well as shorter hours and longer vacations, than do capitalists, who bear the costs and risks of such changes (under Laissez Faire) but do not receive the full benefits. Workers are likely to be more skilled under Economic Democracy, because neither competitive pressures nor the need for control will push so hard toward deskilling.”—David Schweickart, Against Capitalism (Boulder, CO: Westview Press, 1996)
America the Possible: The Values
[….] “Many thoughtful Americans have concluded that addressing our many challenges will require the rise of a new consciousness, with different values becoming dominant in American culture. For some, it is a spiritual awakening—a transformation of the human heart. For others it is a more intellectual process of coming to see the world anew and deeply embracing the emerging ethic of the environment and the old ethic of what it means to love thy neighbor as thyself. But for all, the possibility of a sustainable and just future will require major cultural change and a reorientation regarding what society values and prizes most highly.
In America the Possible, our dominant culture will have shifted, from today to tomorrow, in the following ways:
We actually know important things about how values and culture can be changed. One sure path to cultural change is, unfortunately, the cataclysmic event—the crisis—that profoundly challenges prevailing values and de-legitimizes the status quo. The Great Depression is the classic example. I think we can be confident that we haven’t seen the end of major crises.
Two other key factors in cultural change are leadership and social narrative. Leaders have enormous potential to change minds, and in the process they can change the course of history. And there is some evidence that Americans are ready for another story. Large majorities of Americans, when polled, express disenchantment with today’s lifestyles and offer support for values similar to those urged here.
Another way in which values are changed is through social movements. Social movements are about consciousness raising, and, if successful, they can help usher in a new consciousness—perhaps we are seeing its birth today. When it comes to issues of social justice, peace, and environment, the potential of faith communities is vast as well. Spiritual awakening to new values and new consciousness can also derive from literature, philosophy, and science. [….]
Education, of course, can also contribute enormously to cultural change. Here one should include education in the largest sense, embracing not only formal education but also day-to-day and experiential education as well as the fast-developing field of social marketing. Social marketing has had notable successes in moving people away from bad behaviors such as smoking and drunk driving, and its approaches could be applied to larger cultural change as well.
A major and very hopeful path lies in seeding the landscape with innovative, instructive models. In the United States today, there is a proliferation of innovative models of community revitalization and business enterprise. Local currencies, slow money, state Genuine Progress Indicators, locavorism—these are bringing the future into the present in very concrete ways. These actual models will grow in importance as communities search for visions of how the future should look, and they can change minds—seeing is believing. Cultural transformation won’t be easy, but it’s not impossible either.” [….]—From James Gustave Speth’s “America the Possible: A Manifesto, Part II,” Orion magazine (May/June 2012)
A Civic Minimum: A Reform Programme
Making work pay: “All those who are expected to satisfy a minimum work expectation must receive a decent minimum income in return for doing so. This includes not only a level of post-tax earnings sufficient to cover a standard set of basic needs, but also a decent minimum of health-care and disability coverage…. The model of a minimum wage combined with in-work benefits for the low-paid, including child-care subsidies for low earners, is certainly one credible approach to this task.”
From a work-test to a participation-test: “Work-tests within the welfare system are…legitimate in principle. But in order that different forms of productive contributions can be treated equitably, social policy must be structured in a way that acknowledges the contributive status of care work. This implies a need to offer some public support for care workers, relieving their need to do paid work to maintain access to the generous basic needs package described above. Relevant policies here might include payment of a decent social wage to those engaged in looking after the elderly or the handicapped on a full-time basis and publicly subsidized parental leave from paid employment. In other words, access to the generous basic-needs package should be conditional not on satisfying a work-test, narrowly construed in terms of paid employment, but on satisfying a broader participation-test, where participation is understood to include paid employment and (at least in addition) specified forms and amounts of care work.”
Towards a two-tiered income support system: “[T]he debate over ‘welfare reform’ is often polarized between supporters of an unconditional basic income that is not subject to any work- or participation-test, nor to any time limit, and supporters of time-limited workfare. An alternative approach…looks to establish a two-tiered system of income support. The first tier, which we may call conventional welfare, would be contractualist in kind. It would offer support through a mix of income-related and universal benefits, but support that is also linked to, and conditional on, productive contribution. While work- or participation-tested, support at this level would not be time-limited. [….] The second tier might then consist of something like the time-limited basic income…. This would be an additional income grant, not subject to any work- or participation-test, but which would be time-limited. Citizens could trigger the entitlement for a fixed amount of time over the full course of their working lives, but would not enjoy it indefinitely.”
Universal capital-grant or social drawing rights: “[We previously] set out the case for instituting a generous capital endowment as a basic right of economic citizenship. … [A] scheme of universal capital grants might in part incorporate the time-limited basic income mentioned above. Otherwise, the grants could be linked to activities that are related to productive contribution in the community, such as education, training, setting up a business, and, perhaps, care work….
Accessions tax: “[We have also made] the case for heavy taxation of wealth transfers (inheritances, bequests, inter vivos gifts). Such taxation is important to help prevent class inequality and violation of reciprocity. There is a strong case for hypothecating the funds from taxation of wealth transfers to the funding of a universal capital-grant scheme.”
“[T]his short list is not, by any means, exhaustive of the policies and institutions that might be necessary, or helpful, [in order to] reform the terms of economic citizenship so as to meet the demands of fair reciprocity (in its non-ideal form).”— Stuart White, The Civic Minimum: On the Rights and Obligations of Economic Citizenship (Oxford University Press, 2003).
Relevant Bibliographies (i.e., lists with varying degrees of family resemblance) on my Academia page:
Posted at 12:03 AM in Patrick S. O'Donnell | Permalink | Comments (0)
The GOP is the political party of imaginary if not delusionary grievances, of crass and clownish schtick, of denial and desperation, of repugnance and regression, of illusion and irrationality, of empty gestures and vain cynicism, of authoritarianism and aspirational fascism, of obscene wealth and amoral power, of sycophants and cults, of self-deception and phantasy, of white supremacy (and racism generally) and narcissistic privilege, of putatively Christian nationalism, a faux populism of bread and circuses that has failed to conceal, let alone contain, a degraded and debased political practice mired in a toxic dump of greed, corruption, and sleaze. Republicans effectively tolerate and thus provide a de facto endorsement of the epidemic of gun violence and mass shootings in this country. The Republican Party continues to peddle QAnon phantasies, continues to minimize the constitutional and democratic threats, dangers and deadly consequences of the Jan. 6th insurrection, and continues its perfervid devotion to purely performative politics, pornographic political theater which routinely and shamelessly invokes the mind-numbing, soul-degrading, and polarizing rhetoric of culture wars to distract their constituents from the patently plain fact that they cannot govern, that they have no coherent or responsible political policies to proffer the voting public.
Posted at 06:48 AM in Patrick S. O'Donnell | Permalink | Comments (0)
Are we, or might we become, artificial intelligences “living” in a virtual or artificial reality (a ‘simulation’)?
“There’s a new creation story going around. In the beginning, someone booted up a computer. Everything we see around us reflects states of that computer. We are artificial intelligences living in an artificial reality — a ‘simulation.’ It’s a fun idea, and one worth taking seriously, as people increasingly do. But we should very much hope that we’re not living in a simulation.”—Eric Schwitzgebel in an op-ed in the LA Times. (Schwitzgebel is a well-known philosopher at UC Riverside who has a blog titled The Splintered Mind, where this article is also found, with links and additional material.)
I don’t think “simulation” is a “fun idea” (outside of science fiction), if only because far too many people today are liable to believe virtually any story, idea, or scenario, no matter how implausible, divorced from reality, or suffused with phantasy, provided it gains some traction on social and mass media (one can cite quite a number of examples on this score, much of it owing to QAnon (see here and here), but most recently, consider ‘birds aren’t real’). The notion that we are living in a “simulation” is nonsense if not phantasy or, less harshly perhaps, science fiction. To take it “seriously” is to treat it as worthy of consideration in terms of possibility or probability, which is, I think, dangerous because not true (though I cannot prove it to the satisfaction of some philosophers, any more than I can prove that there is, or is not, a God).
Not long ago there was a popular science article with the headline, “New Physics Experiment Indicates There’s No Objective Reality.” The notion of simulation or artificial reality is in the same ballpark, at least in spirit, although I doubt any philosophers of science have drawn the same conclusions that a few physicists and science journalists apparently did from the experiment discussed in the article. In our case, however, philosophers are entertaining hypotheses that legitimate the dreams and fantasies (and perhaps ‘phantasies’ in the pathological sense) of more than a few computer scientists. Perhaps we should extend “reality testing” beyond psychotherapeutic and psychiatric settings to the speculative musings of some scientists and philosophers:
“Reality testing is a concept initially devised by Sigmund Freud which is used by some therapists to assist clients in distinguishing their internal thoughts, feelings and ideas from the events, which are based within reality. In other words, it is the ability to see a situation for what it really is, rather than what one hopes or fears it might be. However the need for reality testing extends beyond a therapeutic setting and the need to appropriately distinguish our inner world from reality is something, which occurs in everyday life.”
However, something more like or closer to the notion of an ”artificial reality” might be said to exist or have some semblance of sense or coherence when we look, for example, at Advaita Vedānta philosophy, with its philosophical and religious ideas of fundamental ignorance (avidyā) and entanglement in māyā (illusion). Yet our everyday reality must nonetheless be treated as “real” until such time as one has attained the “higher truth”—the aim of jñāna-yoga—which is awareness/knowledge of (nirguna) Brahman. This appears to be a monistic philosophy, yet its monism is decidedly different from modern idealist and realist conceptions (wherein there is one kind of ‘something’ or stuff), for Brahman (the ‘One without a second’) is, in the end (and beginning, as it were), beyond predication, beyond conceptualization (hence it is non-rational or supra-rational but not necessarily irrational). And while Advaita philosophers will, so to speak, talk about what Brahman is or is not (here we find rhetorical warrant by way of its role in spiritual motivation and the philosophical and religious milieu in which this tradition was engaged in arguments with other Indic philosophical schools, including other Vedantin ‘schools’), it is nonetheless dogmatically stated to be absolutely indescribable or “ineffable” (cf. the Daodejing, which says the Dao that can be talked about is not the true, or real, or everlasting Dao). This religious philosophy is perhaps best characterized as a “neutral” monism, or better, after Ram-Prasad, metaphysical non-realism, thus an Indic exemplification of apophatic mysticism. I won’t go into the complex details here, but an Advaitan would not refer to the everyday reality we experience in daily life as “artificial” or a “simulation,” for it remains real for us, at least until such time as one has the awareness of (nirguna) Brahman, which is not something experienced after death but in this very life. So for all intents and purposes, everyday reality is quite real for most of us, at least epistemically and practically speaking, for it is only illusory from the vantage point of those who’ve had Brahman realization, not unlike the Platonic philosopher who has had a vision of the Good and is compelled to return to the Cave, which its denizens view as definitively real.
Another similar philosophical and religious notion is found in some traditions of Buddhism. In the Mahāyāna Buddhist schools, for example, “dependent origination” [(P.) paticca-samuppāda/(S.) pratītya-samutpāda] is characterized by “emptiness” (śūnyatā), meaning the lack of “inherent” existence or essential being (among other things). This emptiness is not “nothingness” but the real mode of being of things, as cause and effect, identity and difference, entity and non-entity function within a cohesive system or complex, interconnected world of dependent origination. As Thupten Jinpa says, dependent origination and emptiness might be thought of as two sides of the same coin. We might ask ourselves at this point how all of this relates to the possibility of nibbāna/nirvāna, defined as the subject’s experience of a liberated or unconditioned mind, the phenomenal character of which is characterized in terms of immeasurable peace and true happiness, “completely untainted by the presence or possibility of mental duhkhā/dukkhā.” A Zen Buddhist will proclaim that form is empty, that all phenomena in the world are illusory. On the other hand, over the centuries a prodigious amount of artwork has been created in association with Zen thought and practice, which one might characterize as, in part, or for metaphysical reasons, illusory, but its illusory nature is availing, serving the purposes of Zen spiritual training. Once more, the ethical and spiritual reasons for and consequences of these ideas are rather complicated, but in both Advaita Vedānta and some schools of Buddhism, the illusory nature of our experiences does not mean we are living in a computer-generated “artificial reality” or a “simulation,” even if reality has a “provisional” status when viewed in metaphysical terms from within the liberated state of awareness designated as awareness of (nirguna) Brahman or nibbāna.
If you think the aforementioned comparisons from physics and Indian philosophy are a tad overdrawn or unnecessary by way of clarifying boundaries and concepts, practices and purposes, or viable hypotheses from fantasies and (more troubling) phantasies, consider the title of a recently published book: The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics All Agree We Are In a Video Game (Bayview Books, 2019).
Given the widespread dispositional tendency to self-deception, denial, wishful thinking (of a deleterious sort) and the corresponding subscription to illusions, myths, delusions, and phantasies that suggest evidence of mass psychoses or forms of shared mental illness (what Fromm memorably termed the ‘pathology of normalcy’), it strikes me as irresponsible for philosophers like Chalmers to accord plausibility to what is charitably or misleadingly termed an “hypothesis” or thought experiment:
“Although the standard argument for the simulation hypothesis traces back to a 2003 article from Oxford philosopher Nick Bostrom, 2022 is shaping up to be the year of the sim. In January, David Chalmers, one of the world’s most famous philosophers, published a defense of the simulation hypothesis in his widely discussed new book, Reality+[: Virtual Worlds and the Problems of Philosophy (W.W. Norton & Co., 2022)]. Essays in mainstream publications have declared that we could be living in virtual reality, and that tech efforts like Facebook’s quest to build out the metaverse will help prove that immersive simulated life is not just possible but likely — maybe even desirable.”
Ours is a generation quite familiar with and perhaps inordinately fond of such popular science fiction films as 2001: A Space Odyssey (in which ‘HAL is an artificial intelligence, a sentient, synthetic, life form’) and The Matrix (which ‘depicts a dystopian future in which humanity is unknowingly trapped inside a simulated reality, the Matrix, which intelligent machines have created to distract humans while using their bodies as an energy source’). In the latter, “a copy of Jean Baudrillard’s philosophical work Simulacra and Simulation [in French, 1981] … is visible on-screen as ‘the book used to conceal disks,’” although Baudrillard “said that The Matrix misunderstands and distorts his work.” So perhaps it should not surprise us that our scientists and philosophers are transforming if not transfiguring science fiction and fantasy into simulation hypotheses that include the possibility of “virtual reality” one day effacing the substantive metaphysical and ontological boundaries and differences with reality as we know (and have known) it. This reminds one of how utopian thought and imagination have often been misunderstood and misused as pointing to imminent historical possibilities or pictures of the future. That accounts in part for the “Liberal anti-utopianism” of such philosophical luminaries as Karl Popper, Hannah Arendt, and Isaiah Berlin, who came to “dismiss utopians or their sympathizers as foolhardy dreamers at best and murderous totalitarians at worst.” Hence we often find the term “utopian” used in a pejorative sense, despite the fact that this represents a tragic yet remediable misunderstanding of the moral, social, and political function of utopian thought and imagination.
Returning to Schwitzgebel:
“Scientists and philosophers have long argued that consciousness should eventually be possible in computer systems. With the right programming, computers could be functionally capable of independent thought and experience. They just have to process enough information in the right way, or have the right kind of self-representational systems that make them experience the world as something happening to them as individuals.
In that case, the argument goes, advanced engineers should someday be able to create artificially intelligent, conscious entities: ‘sims’ living entirely in simulated environments. These engineers might create vastly many sims, for entertainment or science. And the universe might have far more of these sims than it does biologically embodied, or ‘real,’ people. If so, then we ourselves might well be among the sims.”
Poppycock! At least in reaction to the last sentence in each of these two paragraphs. This is literary phantasy or science fiction. I find the arguments here implausible for motley reasons, some of which are canvassed in my post on Turing’s AI philosophy (which was not static and not always pellucid, conceptually speaking). I well realize that some science fiction (much like some, or at least aspects of some utopias) has, over time, become reality, but to speak of “computers [that] could be functionally capable of independent thought and experience” is metaphysical, ontological, and psychological nonsense. It reveals a failure to understand just what thought and experience among human animals entails, what they mean, both conceptually and practically. It represents, in my view, a colossal failure to think long and hard about human nature, what makes for a human animal (here is where I find many of the books by Raymond Tallis to be indispensable). And it shows how much of what counts as “philosophy of mind” in contemporary philosophy, when not wholly reductionist (or ‘eliminativist’) is, consciously or not, a mere underlaborer for sullied sciences and technology enchanted by the Promethean promises of science fiction and phantasy.
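For clarity about exactly what I am dismissing, the skeleton of the Bostrom-style reasoning can be put as a bland indifference calculation (this gloss is mine, not Schwitzgebel’s). If N_sim is the number of simulated conscious beings the imagined engineers create, and N_real the number of biologically embodied people, then, counting ourselves as just one observer among the lot, our credence that we are simulated is supposed to be N_sim / (N_sim + N_real), which approaches 1 as N_sim vastly exceeds N_real. Every load-bearing assumption packed into that little fraction (that ‘sims’ would be conscious, that such beings can be counted alongside us as observers, that indifference is the right epistemic attitude) is precisely what strikes me as unargued-for phantasy.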
If I understand him correctly, Schwitzgebel leaves open the possibility there may one day be “sophisticated simulations containing genuinely conscious artificial intelligences.” I believe that scenario to be impossible. A more sensible Schwitzgebel concludes his LA Times article (it is reprinted at his blog but with a postscript): “A large, stable planetary rock is a much more secure foundation for reality than bits of a computer program that can be deleted at a whim.”
Relevant Bibliographies
Relevant Notes
This is also available for viewing or download on my Academia page as The Simulation Argument and Hypothesis in Philosophy.
Posted at 03:33 PM in Patrick S. O'Donnell | Permalink | Comments (0)
Introduction
I address a few of the arguments of Turing out of respect for his intellectual brilliance, which I hope is conveyed in the introductory material below.
“Alan (Mathison) Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. He is widely considered to be the father of theoretical computer science and artificial intelligence.
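[As an aside, for readers who wonder what the “formalisation of the concepts of algorithm and computation” mentioned above actually looks like: a Turing machine is nothing more than a finite table of rules driving a read/write head along an unbounded tape. The following toy sketch in Python is my own illustrative gloss, not anything of Turing’s; it computes the successor of a number written in unary notation:]

```python
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, symbol_to_write, head_move)."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # the unbounded tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Successor in unary notation: scan right past the 1s, write one more 1
# at the first blank square, then halt.
successor_rules = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}

print(run_turing_machine(successor_rules, "111"))  # prints "1111" (3 + 1)
```

[Everything such a machine “does” is exhausted by its rule table; that austerity is the whole point of the formalisation.]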
* * *
“Alan Turing did not fit easily with any of the intellectual movements of his time, aesthetic, technocratic or marxist. In the 1950s, commentators struggled to find discreet words to categorise him: as ‘a scientific Shelley,’ as possessing great ‘moral integrity.’ Until the 1970s, the reality of his life was unmentionable. He is still hard to place within twentieth-century thought. He exalted the science that according to existentialists had robbed life of meaning. The most original figure, the most insistent on personal freedom, he held originality and will to be susceptible to mechanisation. The mind of Alan Turing continues to be an enigma.
But it is an enigma to which the twenty-first century seems increasingly drawn. The year of his centenary, 2012, witnessed numerous conferences, publications, and cultural events in his honor. Some reasons for this explosion of interest are obvious. One is that the question of the power and limitations of computation now arises in virtually every sphere of human activity. Another is that issues of sexual orientation have taken on a new importance in modern democracies. More subtly, the interdisciplinary breadth of Turing's work is now better appreciated. A landmark of the centenary period was the publication of Alan Turing, his work and impact (eds. Cooper and van Leeuwen, 2013), which made available almost all aspects of Turing's scientific oeuvre, with a wealth of modern commentary. In this new climate, fresh attention has been paid to Turing's lesser-known work, and new light shed upon his achievements. He has emerged from obscurity to become one of the most intensely studied figures in modern science.”
* * *
“Throughout his life, Alan Turing’s fearless approach to daunting problems helped him break new conceptual ground. From his time at Cambridge, when he published papers now recognised as the foundation of computer science, through his vital work at Bletchley Park cracking German codes – shortening the Second World War by years – to his exploration of the notion of artificial intelligence and his fascination with the application of mathematics to the biological world. At The [Alan Turing] Institute we aim to adopt a similarly ground-breaking, multi-faceted approach to our research. Despite being a singular genius, Turing was also a great collaborator, both with the hundreds of women and men at Bletchley Park, and throughout his career working with other mathematicians, engineers and scientists.
The biography Alan Turing: The Enigma by Andrew Hodges includes the following quote from Turing, which sums up the spirit in which the Institute operates: ‘The isolated man does not develop any intellectual power. It is necessary for him to be immersed in an environment of other[s]…. The search for new techniques must be regarded as carried out by the human community as a whole, rather than by individuals.’
Turing’s life was tragically affected by the societal norms of his time: despite his pivotal part in ensuring the safety of the nation and saving countless lives, his homosexuality resulted in him being defined as a security risk, and he was harassed by police surveillance up until his untimely death in 1954. Though we now live in a more progressive and open society, at the Institute we recognise the importance of actively ensuring anyone in ‘the human community’ can contribute effectively to changing the world through data science. We do this through our commitment to equality, diversity and inclusion, demonstrated by events such as ‘Gamechangers for diversity in STEM.’
On Turing’s influence on the modern world of data science Vinton Cerf, Chief Internet Evangelist for Google, says: ‘His practical realisations of computing engines shed bright light on the feasibility of purposeful computing and lit the way towards the computing rich environment we find in the 21st Century.’ Our programme in Data Science at Scale continues this legacy, identifying the ways in which computers and algorithms can be better designed to fulfil a huge range of purposes and tasks. And our Research Engineering team, which likes to think of itself as an echo of the Bletchley Park ‘Hut 8’ group led by Turing, helps the Institute develop practical data science tools.
The mathematical foundations strand of our Data-centric Engineering programme also recognises that delivering reliable and robust data science solutions requires rigorous theoretical research and practices. It’s a notion which aligns well with the ‘from first principles’ approach Turing often adopted in his work.
Turing’s revolutionary ideas in cryptography were developed in service of public safety and security, and the Institute’s programme in Defence and Security is continuing this purpose. For example, we have multiple projects looking at ways to store sensitive data, such as health records, in the cloud, in a way that not only allows the data to remain encrypted, but also makes them accessible to publicly beneficial research, without compromising anyone’s privacy.”
* * *
I do not think it is true or fair to say that “artificial intelligence” (hereafter, AI) replicates human cognitive abilities. It may “replicate,” in a very attenuated or simply analogical sense, one aspect or feature of one particular cognitive ability, namely formal logical reasoning. But even then, inasmuch as human cognitive abilities do not function in isolation, working more or less in tandem and within a larger cognitive (and affective, etc.) human context, the replication that takes place in this case is not in any way emulative of human intelligence as such. AI is not about emulating or instantiating (peculiarly) human intelligence, but is rather a technological replication of aspects of formal logic that can be mathematized (such as algorithms). It is thus a mistake to describe the process here as one of “automated reasoning” (i.e., AI machines don’t ‘reason,’ they compute and/or process), if only because our best philosophical and psychological conceptions of rationality and reasoning cast a net—as the later Hilary Putnam often reminded us—far wider than anything that can, in fact or in principle, be scientized, logicized, or mathematized (i.e., formalized).
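To make the contrast concrete, here is a deliberately trivial sketch (my own, offered purely for illustration) of the sort of formal inference a machine can mechanize: forward chaining over explicitly encoded rules. Nothing in the loop understands, judges, or weighs anything; it shuffles symbols whose meaning resides entirely with us:

```python
# Forward chaining: mechanically apply "if premises, then conclusion" rules
# to a set of known facts until nothing new can be derived. The strings are
# meaningless to the machine; their interpretation is supplied by humans.

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "socrates_is_athenian"}, "an_athenian_is_mortal"),
]
print(forward_chain({"socrates_is_human", "socrates_is_athenian"}, rules))
# prints a set of four facts, the last two of them "derived"
```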
Moreover, AI only occurs as the result of the (creative, collaborative, managerial …) work of human designers, programmers, and so on, meaning that it does not in any real sense add up to the construction of “autonomous” (as that predicate is used in moral philosophy and moral psychology) machines or robots, although perhaps we could view this as a stipulative description or metaphor, albeit one still parasitic (and to that extent misleading) on conceptions of human autonomy. And insofar as a replication is in reference to a copy, in this case the copy is not an exact replica of the original. We should therefore be critically attentive to the metaphors and analogies (but especially the former) used in discussions of AI: “learning,” “representation,” “intelligence” (the adjective ‘artificial’ thus deserving more descriptive and normative semantic power), “autonomous,” “mind,” “brain,” and so forth; these often serve, even if unintentionally, to blur distinctions and boundaries, muddle our thinking, create conceptual confusion, obscure reality, and evade the truth. Often extravagant claims are made (by writers in popular science, scientists themselves, mass media ‘experts’ and pundits, philosophers, corporate spokespersons or executives, venture capital and capital investors generally, and so forth) to the effect that AI computers or machines possess powers uncannily “like us,” that is, they function in ways that, heretofore at least, were demonstrably distinctive of human (and sometimes simply animal) powers and capacities, the prerogative, as it were, and for better and worse, of human beings, of persons, of personal normative agency.
Recent attempts to articulate something called “algorithmic accountability,” meaning a concern motivated by the recognition that data selection and (so to speak) algorithmic processing often encode “politics,” psychological biases or prejudices, or stereotypical judgments, are important, but attributions of accountability and responsibility, be they individual or shared (in this case, almost invariably the latter), can only be human, not “algorithmic” in the first instance, hence that notion lacks any meaningful moral or legal sense unless it is a shorthand reference to the human beings responsible for producing or programming the algorithms in the first place.
A fair amount of the philosophical, legal, and scientific (including computer science) literature—both its questions and propositions—on AI (‘artificial intelligence’), including robots, “autonomous artificial agents,” and “smart machines,” is replete with implausible presuppositions and assumptions, as well as question-begging premises, such that the arguments can be characterized by such terms as implausibility, incoherence, and even “nonsense” (or failure to ‘make sense’). In short, the conceptual claims upon which its statements and premises depend, that which is at the very heart of its arguments, often make no sense, that is to say, they “fail to express something meaningful.”
Sometimes even the very title of a book will alert us to such nonsense, as in the case of the volume by Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2009). The book title begs the question, first with the predicate “moral,” and correlatively, with the phrase “teaching robots right from wrong,” which depends upon concepts heretofore never applied outside human animals or persons (we can speak of ‘teaching’ at least some kinds of animals, but it makes no sense to speak of teaching them ‘right from wrong’), and thus it is eminently arguable whether or not we can truly “teach” robots anything, let alone a basic moral orientation, in the way, say, that we teach our children, our students, or each other, whether in informal or formal settings. The novelty of the claim, as such, is not what is at issue, although the radical and unprecedented manner in which it employs concepts and words should provoke presumptive doubts as to whether or not our authors have a clear and plausible picture of what it means for “us” to “be moral,” what it typically means for us to teach someone “right from wrong,” or how someone learns this fundamental moral distinction. We might employ such words in a stipulative or even theoretical sense, for specific and thus very limited purposes that are parasitic on conventional or lexical meaning(s), or perhaps simply analogical or metaphorical at bottom, but even in those cases, one risks misleading others by implying or evoking eminently questionable or arguable presuppositions or assumptions that make, in the end, for more or less conceptual confusion if not implausibility.
According to our authors, respectively a consultant and writer affiliated with Yale’s Interdisciplinary Center for Bioethics and a Professor of History and Philosophy of Science and of Cognitive Science, “today’s [computer] systems are approaching a level of complexity … that requires the systems to make moral decisions—to be programmed with ‘ethical subroutines’ to borrow a phrase from Star Trek” (the blurring of the lines between contemporary science and science fiction, or the belief that much that was once science fiction on this score is no longer fiction but the very marrow of science itself, is commonplace). This argument depends, I submit, on a rather implausible model of what it means for us to make “moral decisions,” as well as on an incoherent or question-begging application of the predicate “ethical.” Wallach and Allen open the Introduction with the breathless statement that scientists at the Affective Computing Laboratory at the Massachusetts Institute of Technology (MIT) “are designing computers that can read human emotions,” as if this is a foregone conclusion awaiting technical development or completion. Human beings are not always adept at reading emotions insofar as we can hide them or simply “fake it,” as it were. In any case, as I will later argue, a machine cannot understand what constitutes a human emotion. For now, an assertion will have to suffice: The expression of emotions in persons is in fact an incredibly complex experience, involving both outward and inner dimensions (some of which are cognitive), biographical history, relational contexts and so forth, all of which are part, in principle or theory, of an organic whole, that is, the person. In the words of P.M.S. Hacker,
“Emotions and moods are the pulse of the human spirit. They are both determinants and expressions of our temperament and character. They are tokens of our engagement with the world and with our fellow human beings. [….] [T]he emotions are also perspicuously connected with what is, or is thought to be, good and bad. Our emotional pronenesses and liabilities are partly constitutive of our temperament and personality. Our ability to control our emotions, to keep their manifestations and their motivating force within the bounds of reason, is constitutive of our character as moral agents. So the investigation of the emotions is a fruitful prolegomenon to the philosophical study of morality. It provides a point of access to the elucidation of right and wrong, good and evil, virtue and vice, that skirts the morass of deontological and consequentialist approaches to ethics without neglecting the roles of duties and obligations, or the role of the consequences of our actions in our practical reasoning.”
Last year I argued these points about the putative possibility of computers reading emotions in further detail, but as this post is long enough, I will leave it at that and move on to other things.
Consider these respective definitions of “learning” and “pedagogy:”
(i) “Learning is the process [the human experience] of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences.”
(ii) “Pedagogy, most commonly understood as the approach to teaching, is the theory and practice of learning, and how this process influences, and is influenced by, the social, political and psychological development of learners.”
Assuming the above definitions and characterizations are roughly—or in the main—correct and true, what does it mean to say that machines are capable of “learning?” How do machines learn? In what ways is that the same as, resemble, or mimic human learning (I’m leaving out nonhuman animals for now)?
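A concrete (and deliberately deflationary) illustration may help here. What follows is a minimal sketch of my own, not a description of any particular system: in the machine case, “learning” typically amounts to nothing more than the iterative numerical adjustment of parameters so as to shrink an error score on data a human has selected and encoded. Whatever we decide to call this, it involves no experience, no understanding, and no consciousness:

```python
# "Machine learning" in miniature: fit the model y = w * x to a handful of
# data points by gradient descent. The entire "learner" is one number, w.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # points lying roughly on y = 2x

w = 0.0               # initial guess for the parameter
learning_rate = 0.01
for _ in range(500):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # this update is the whole of the "learning"

print(round(w, 2))  # ~2.04: a slope has been fit; nothing has been understood
```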
When Alan Turing said, “The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves,” it is not clear precisely what he meant, but one thing we can say at this point is that the development of “artificial intelligence” (AI) has quickened our appreciation of how much thinking, intelligence, and learning among human beings is radically different from what we observe or have achieved with AI in computers and robots. Indeed, whatever “learning” takes place with computers is wholly dependent in the first instance on computer programmers, and the so-called learning programs they develop are not at all similar to the way we learn (which, at bottom, is based on experience, on consciousness, and a mind, none of which are properties of a computer). Consider, if you will, this story from an article by the philosopher Sebastian Sunday Grève in Aeon, “AI’s First Philosopher,” which motivated me to address a few of Turing’s more philosophical ideas and arguments as Grève shares and summarizes them:
“Due to his true scientific interests in the development of computing technology, Turing had quickly become frustrated by the ongoing engineering work at the National Physical Laboratory, which was not only slow due to poor organisation but also vastly less ambitious in terms of speed and storage capacity than he wanted it to be. In mid-1947, he requested a 12-month leave of absence. The laboratory’s director, Charles Darwin (grandson of the Charles Darwin), supported this, and the request was granted. In a letter from July that year, Darwin described Turing’s reasons as follows:
‘He wants to extend his work on the machine still further towards the biological side. I can best describe it by saying that hitherto the machine has been planned for work equivalent to that of the lower parts of the brain, and he wants to see how much a machine can do for the higher ones; for example, could a machine be made that could learn by experience?’
While provocative, the question trades on conceptual confusion or ignorance about the nature of human experience: simply put, machines cannot and never will have experiences. That is, if you will, a metaphysical or ontological fact (and Turing deliberately if unsuccessfully avoided explicitly addressing such topics).
In a 1948 paper that received widespread attention only much later, Turing proclaims that “analogy with the human brain is used as a guiding principle.” This prescription was taken to heart within the computer sciences and the field of AI, as analogies, direct and indirect, arising or derived from connectionist approaches (neural networks, etc.) have been combined with earlier and now more elaborate or sophisticated deductive-logical and mathematical algorithms in the field. One ironic consequence of this approach is that while a model of the human brain (in both an analogical and metaphorical sense) was to be a “guiding principle” for AI research, what became commonplace in such fields as psychology, neuroscience, and philosophy was talk and theoretical models that speak of the mind or the brain as like a—or even some sort of—computer, effectively reversing the relevant similarities and correspondences! In a later lecture Turing recognizes and effectively endorses this new and radically reductionist picture or model:
“If any machine can appropriately be described as a brain, then any digital computer can be so described … If it is accepted that real brains … are a sort of machine it will follow that our digital computer, suitably programmed, will behave like a brain.”
Be careful: “if…” does a lot of work here. Are “real brains” (let alone minds, which are not brains), in his words, “a sort of machine”? Yet Turing had a tendency later to qualify or deflate his more extravagant hopes and dreams, as evidenced in the following:
“‘The fact is,’ he went on to explain, ‘that we know very little about it [how to program a machine to behave like a brain], and very little research has yet been done.’ He adds: ‘I will only say this: that I believe the process should bear a close relation [to] that of teaching.’”
And now we come back to the parts played by learning and pedagogy among human beings; the field of AI is not involved in practices that clearly involve a “relation” to teaching, let alone a “close relation.”
Here is where Grève’s article is revealing:
“… [A] fresh look at the 1950 paper shows that Turing’s aim clearly went beyond merely defining thinking (or intelligence) – contrary to the way in which philosophers such as Searle have tended to read him – or merely operationalising the concept, as computer scientists have often understood him. In particular, contra Searle and his ilk, Turing was clearly aware that a machine’s doing well in the imitation game is neither a necessary nor a sufficient criterion for thinking or intelligence. This is how he explains the similar test that he presents in the radio discussion:
‘You might call it a test to see whether the machine thinks, but it would be better to avoid begging the question, and say that the machines that pass are (let’s say) ‘Grade A’ machines … My suggestion is just that this is the question we should discuss. It’s not the same as ‘Do machines think,’ but it seems near enough for our present purpose, and raises much the same difficulties.’
This passage, along with Turing’s other writings and public speeches on the philosophy of AI (including all those described above), has received little attention. However, taken together, these writings provide a clear picture of what his primary goal was in formulating the imitation game. For instance, they show that, from 1947 onwards (and perhaps earlier), in pursuit of the same general goal, Turing in fact proposed not one but many tests for comparing humans and machines. These tests concerned learning, thinking and intelligence, and could be applied to various smaller and bigger tasks, including simple problem-solving, games such as chess and Go, as well as general conversation. But his primary goal was never merely to define or operationalise any of these things. Rather, it was always more fundamental and progressive in nature: namely, to prepare the conceptual ground, carefully and rigorously in the manner of the mathematical philosopher that he was, on which future computing technology could be successfully conceived, first by scientists and engineers, and later by policymakers and society at large.
It is widely overlooked that perhaps the most important forerunner of the imitation game is found in the short final section of Turing’s long-unpublished AI research paper of 1948, under the heading ‘Intelligence as an Emotional Concept.’ This section makes it quite obvious that the central purpose of introducing a test such as the imitation game is to clear away misunderstandings that our ordinary concepts and the ordinary use we make of them are otherwise likely to produce. As Turing explains:
‘The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to imagine intelligence.’
We want our scientific judgment as to whether something is intelligent or not to be objective, at least to the extent that our judgment will not depend on our own state of mind; for instance, on whether we are able to explain the relevant behaviour or whether we perhaps fear the possibility of intelligence in a given case. For this reason – as he also explained in each of the three radio broadcasts and in his 1950 paper – Turing proposed ways of eliminating the emotional components of our ordinary concepts. In the 1948 paper, he wrote:
‘It is possible to do a little experiment on these lines, even at the present stage of knowledge. It is not difficult to devise a paper machine [i.e., a program written on paper] which will play a not very bad game of chess. Now get three men as subjects for the experiment A, B, C. A and C are to be rather poor chess players, B is the operator who works the paper machine. (In order that he should be able to work it fairly fast it is advisable that he be both mathematician and chess player.) Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.’
It is true that, in addition to his conceptual work, Turing advanced numerous philosophical arguments to defend the possibility of machine intelligence, anticipating – and, arguably, refuting – all of the most influential objections (from the Lucas-Penrose argument to Hubert Dreyfus to consciousness). But that is markedly different from providing metaphysical arguments in favour of the existence of machine intelligence, which Turing emphatically refused to do.”
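[It is worth noting how easily the structure of that 1948 paper-machine experiment can be sketched, which underscores Turing’s point that it is a test of our judgments rather than a metaphysical probe. The toy below is my own schematic, not Turing’s chess machine: the interrogator C sees only a transcript, never its source, and must guess who produced it:]

```python
import random

OPENINGS = ["e4", "d4", "Nf3", "c4"]

def human_moves(n=10):
    return [random.choice(OPENINGS) for _ in range(n)]

def paper_machine_moves(n=10):
    # a "machine" with the same repertoire as the rather poor human player
    return [random.choice(OPENINGS) for _ in range(n)]

def interrogator_guess(transcript):
    # with indistinguishable play there is no signal in the transcript,
    # so any rule C adopts can do no better than chance
    return random.choice(["human", "machine"])

def blinded_trial():
    source = random.choice(["human", "machine"])
    transcript = human_moves() if source == "human" else paper_machine_moves()
    return interrogator_guess(transcript) == source

trials = 10_000
accuracy = sum(blinded_trial() for _ in range(trials)) / trials
print(f"C identifies the source correctly {accuracy:.1%} of the time")  # ~50%
```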
So, while it is undeniable that “Turing advanced numerous philosophical arguments to defend the possibility of machine intelligence,” those arguments must be assessed against plausible if not sound and persuasive arguments explaining just what constitutes human intelligence, arguments which are found in the literature. I am thus confident that it has been decisively demonstrated that whatever “machine intelligence” is, it is in many relevant respects quite different from human intelligence, even if the latter often, in our world, draws upon the former, recalling that the former, in the first and last instance, is in fact wholly dependent on the latter.
Finally, Grève writes that “Turing advanced numerous philosophical arguments to defend the possibility of machine intelligence, anticipating – and, arguably, refuting – all of the most influential objections.” I doubt it is even “arguable” (which does not rule out fresh arguments); indeed, I don’t think it is true that the most influential objections have been refuted, for the putative refutations rely on conceptions of intelligence that radically differ from those used by Dreyfus, Descombes, Hacker, Bennett, Tallis, et al. From my vantage point, Turing at times is writing science fiction or speculative philosophy, which of course he was free to do. But too many intellectual fields related to AI (e.g., cognitive science and psychology, the neurosciences, mathematics, and philosophy) have exploited if not manipulated these facets of his work on the order of utopian blueprints for research programs that often detract from if not avoid more modest uses of AI properly cabined by fundamental ethical and moral principles (found largely outside computer science and the sciences more generally), as well as by the sundry constraints (political, socio-economic, environmental, etc.) that arise from our deep and abiding concern with human welfare, well-being and flourishing within the parameters framed by conceptions of human dignity and human rights.
Here is a list of titles (far from exhaustive and perhaps a bit idiosyncratic) I think can help us properly assess some of the statements and arguments (including presuppositions and assumptions) made by Turing about AI, as well as, and perhaps more importantly or urgently, updated versions of same made by contemporary enthusiasts of AI and robotics, the latter prone to capitalist technophilia and unabashed indulgence in scientific phantasies in the name of Promethean-like promises of civilizational progress that renders modest the optimism of the European Enlightenment:
Posted at 12:03 AM in Patrick S. O'Donnell | Permalink | Comments (2)
Morgenbesser in response to B.F. Skinner: “Are you telling me it’s wrong to anthropomorphize people?”
During campus protests of the 1960s, Sidney Morgenbesser was hit on the head by police. When asked whether he had been treated unfairly or unjustly, he responded that it was ‘unjust, but not unfair. It was unjust because they hit me over the head, but not unfair because they hit everyone else over the head.’
When asked his opinion of pragmatism, Morgenbesser replied, ‘It’s all very well in theory but it doesn’t work in practice.’
What do the following well-known philosophers (well-known save perhaps Rosen) have in common: Hilary Putnam, Jerry Fodor, Raymond Geuss, Alvin Goldman, Daniel M. Hausman, Robert Nozick, Gideon Rosen, Naomi Zack, and Michael Stocker [apologies to those who might also have been included in this list]?
I was startled to learn that they were all students of the philosopher Sidney Morgenbesser (September 22, 1921 – August 1, 2004), who seems to have exemplified what it means to be both a Socratic gadfly and a Socratic midwife, although The New York Times Magazine dubbed him the “Sidewalk Socrates” (cf. Socrates in the agora). Inspired by and synthesizing these characterizations, I’m inclined to believe his pedagogical methods were maieutic, therapeutic, and dialectic, incarnating the philosophically metaphorical equivalent of “street medicine.” Morgenbesser was a Jewish American philosopher (as Geuss notes, ‘Sidney had had rabbinical training at the Jewish Theological Seminary, and had one time been attracted by a movement called ‘Reconstructionist Judaism’….) and professor at Columbia University who “wrote little but is remembered by many for his philosophical witticisms … and humor.” His “areas of expertise included the philosophy of social science, political philosophy, epistemology, and the history of American pragmatism,” co-founding “the Society for Philosophy and Public Affairs [the journal of which is Philosophy & Public Affairs] along with G.A. Cohen, Thomas Nagel and others.”
While recognizing the name but knowing next to nothing about him, I looked up information on Morgenbesser after reading Raymond Geuss’s discussion of his philosophical ideas (‘a great admirer of the pragmatism of John Dewey’) and the profound influence they appear to have had on Geuss himself, as revealed in his book, Who Needs a World View? (Harvard University Press, 2020), even though, as he writes—when Geuss was no longer a student but now a colleague—“Sidney and I eventually had a terminal falling out.”
The above encomia were reinforced when I read in the Preface to Arthur C. Danto’s Narration and Knowledge (Columbia University Press, 1985 [revised ed. of Analytical Philosophy of History, 1968]) that Morgenbesser was a “close friend and colleague” of Danto, whom Danto describes as “a man of warmth, wit, and extraordinary philosophical acuity.” Furthermore, “His own submission to the highest standards of philosophical integrity stands as a kind of conscience upon all who know him.” Elsewhere, in a memorial notice on Morgenbesser’s death, Danto writes he “heard secondhand that someone said that while I did philosophy, Sidney lived philosophy.”
Posted at 02:23 PM in Patrick S. O'Donnell | Permalink | Comments (0)
Adolph Gottlieb, Phoenix Burst, 1973
The other day I learned of a new and intriguing entry in the online Stanford Encyclopedia of Philosophy (SEP), “Arabic and Islamic Philosophy of Mathematics.” Although mathematics is not my cup of tea, mainly because I’m not very good at it (I lost interest at some point during high school), I was still able to follow a fair amount of the discussion. In any case, I was delighted simply to see such an entry, having long thought that Arabic and Islamic philosophy is not sufficiently well-known let alone appreciated among professional philosophers in the West. I looked up the author of the entry, one Mohammad Saleh Zarepour, and discovered he has a broad background encompassing both philosophy and “Theology and Religious Studies,” hence his correspondingly wide range of research interests, which include “medieval Islamic philosophy, philosophy of religion, philosophy of language, philosophy of mathematics, and philosophy of logic.” Still exploring, I clicked on his Publications page and came across an open access article that immediately got my attention, “Islamic Problems and Perspectives in Philosophy of Religion.” For now I merely want to comment on two propositions: (i), which is the first sentence of that article’s abstract, and (ii), which is found on the page that gathers the aforementioned article together with others as part of an effort to attract “the attention of contemporary philosophers of religion to Islam and the Islamic tradition as a rich source … worth considering [for] philosophical problems and approaches.” The papers selected are said to be “a collection of [the] best ever papers about Islam published in [the journal] Religious Studies.”
(i) “Contemporary philosophy of religion is excessively Christianity-oriented.” (from the article’s abstract)
(ii) “Islam and the Islamic tradition have been largely underrepresented in contemporary philosophy of religion.”
To the extent that (i) may be true, it is deeply troubling and inexcusable. However, there is in fact much philosophy of religion that is influenced by topics that arise in Indian and Chinese worldviews, and this goes back to my time in graduate school in the late 1980s (and teachers like Ninian Smart, Gerald J. Larson, Raimundo Panikkar, Herbert Fingarette, Nandini Iyer, among others). One can get an ample taste of this at two blogs: The Indian Philosophy Blog, and Warp, Weft, and Way, a Chinese (and comparative) philosophy blog. Nonetheless, I think (ii) is in fact generally true, thus “Islamic problems and perspectives” have indeed been comparatively neglected, apart from a few notable exceptions, such as the work of Oliver Leaman (and a few of our authors here). One still comes across introductory and other textbooks in the philosophy of religion that are either biased in favor of Christian themes and topics, or at least theistic in orientation (while excluding non-Abrahamic theistic traditions).
One antidote to this state of affairs exists with the emergence and consolidation of the field of “Comparative Philosophy,” which I view as a big tent capable of embracing philosophical material that is both religious and non-religious. Conceived in this way, I think it should subsume and thus eventually replace “Philosophy of Religion” as such. One reason to bring the religious and non-religious material together has to do with topics common to both domains, such as metaphysical, moral and ethical questions, as well as specific subjects, like personal identity, moral psychology, philosophy of mind and consciousness, human nature, values, truth, practical reasoning, art and aesthetics, and so forth and so on. Something like this was proposed over twenty-five years ago by Ninian Smart, who suggested in several books that we use the term “worldviews” to level the playing field, as it were. On that approach, Buddhism, scientific humanism, Existentialism, Personalism, Marxism, and Islam, for example, would all be studied in the same department: it does not matter whether a worldview is religious or not, or whether a particular intellectual or philosopher is secular or religious. Thus Mao and Ramakrishna, Marx and Tagore, Wittgenstein and the worldview of the Masai all help us transcend the “false division between religion and philosophy” which has resulted in an “irrational [academic] division of the study of worldviews” (and, I would add, of ideologies and lifeworlds, the latter being the individual person’s body of values and beliefs as these are derived, accurately or not, from one or more of these).
We might be able to call upon a theory of truth and a corresponding perspectival relativism (which does not mean ‘anything goes’) and thus pluralism by way of providing, or at least attempting, a plausible ontological or metaphysical and epistemic warrant for this field of Comparative Philosophy (or ‘Worldviews’). There might, one imagines, be different approaches to this endeavor, and I happen to favor one that relies in the main on works by such philosophers as Hector-Neri Castañeda, Michael P. Lynch, Nicholas Rescher, B.K. Matilal, and Hilary Putnam, as well as on some ideas found in Jain and perhaps Buddhist doctrines. Of course we need not wait for such a warrant to be formulated or to garner widespread support before embarking on this particular pedagogical reform. At the very least, as teachers or philosophers, we should assume that no one worldview can make a claim to absolute truth, that no worldview has, so to speak, a monopoly on truth, and thus that all worldviews deserve a hearing, a forum in which their claims and views might be contested, debated, and evaluated from positions outside that worldview. In brief, no one worldview or ideology is privileged over the others, even if our descriptive, analytical and evaluative tasks are likely never to be completely impartial, wholly objective, or free of all bias; we should nonetheless aspire to impartiality and objectivity, relying on reason, rationality and reasonableness, all the while remaining true to our ethical sensibilities and a moral compass.
This does not preclude us from cleaving to, identifying with, or demonstrating fidelity to one particular worldview, or from fashioning an idiosyncratic worldview or lifeworld (the latter being uniquely one’s own), one that might be far from systematic, unlike the “official” worldviews propounded by authorities of one kind or another in a particular tradition or school of thought. I would argue that, empirically speaking, all individual worldviews depart in all manner of ways and in varying degrees from such “official” or authoritative worldviews, a fact confirmed in the first instance when we inquire into individual lifeworlds. And while the individual assemblage or construction of a personal worldview or lifeworld may (and typically does) trouble “communitarians,” who stress fidelity to those worldviews christened crucial if not indispensable to one’s upbringing and to one’s moral, intellectual and emotional development, such construction or fashioning or choosing should be considered a possible, and in some societies or cultures a likely, consequence of reaching what we can loosely term moral psychological autonomy, or perhaps simply the “age of reason.” In other words, one reaches that time in life at which one can, and indeed should, be critical of worldviews and ideologies, be they one’s own or those of others, as one is now prepared to think for oneself, which of course neither takes place in a vacuum nor need be an exercise in narcissism or a solipsistic enterprise. Thinking for oneself in this regard does not, to be sure, necessarily mean forswearing the worldview of one’s birth. And it might very well be the case that a young person who abandons the worldviews and traditions of his or her parents, or of those responsible for their upbringing, may at some point become dissatisfied with or troubled by the choice(s) they have (prematurely, immaturely, hastily) made and return to the worldview which nurtured them and originally formed and influenced their outlook on life, having learned and earned a newfound appreciation and understanding of same, which entails a deeper or more discerning awareness of the meaning, beliefs and values it previously provided them. I mention this not only because it is a theoretical or real possibility, but because we know, or at least some of us know, intimately, of cases just like this, cases in which people come back to the traditions and worldviews most familiar to them.
In a future post I want to say more about the form or nature or structure of worldviews, which are often mistakenly assumed to possess more consistency, systematicity, or coherence than they in fact do. To be sure, the passage of time and the nature of traditions are such that “official” or authoritative worldviews, be they religious or not, often strive for such qualities (hence ‘orthodoxy,’ which in some well-known worldviews frequently comes at the expense of ‘orthopraxis’), in response not only to criticisms from the inside but, perhaps more importantly, to interactions with other worldviews and ideologies, be it in the form of dialogue, cultural exchange of one kind or another, or conflict ranging from the emotionally volatile to the violent. In all cases there is invariably explicit and implicit borrowing (some would claim ‘stealing’) and forms of influence that, historically, have led to worldviews which are never wholly or absolutely discrete, let alone “original.” Globalization and kinds of cosmopolitanism were with us long before the Peace of Westphalia and capitalism, and this resulted in worldviews that invariably became more human if not humane. In speaking about worldviews qua worldviews, I will attempt to explain why, all things considered, it remains the case that we need or strongly desire worldviews—perhaps merely as “pictures”—that help us navigate our way about in the world, that help one make it through the day, that keep one sane, that provide one with a sense of meaning and purpose, a cluster of values as guiding lights, or, in the words of John Lennon,
“Whatever gets you through the night
It’s all right, it’s all right …
Whatever gets you to the light
It’s all right, it’s all right …. ”
I have been thinking about such matters afresh while reading Raymond Geuss’s book Who Needs a World View? (Harvard University Press, 2020) (in deference to the German term Weltanschauung, I prefer ‘worldview’ to ‘world view’). In that post, or perhaps a different one, I want to address the role of “narrativity” in more than a few worldviews. Once again a philosopher, Galen Strawson, has moved me to speak on a topic I have not thought about for some time and, what is more, has prompted me to revise my views on storytelling, or narrativity, with his discussion of the “Psychological Narrativity Thesis” and the “Ethical Narrativity Thesis” in two somewhat polemical but no less incisive and well-argued essays, “A Fallacy of Our Age” and “The Unstoried Life” (I will not discuss what he terms the ‘endurant’ and ‘transient’ forms of life, which he earlier defined, as kinds of self-experience, as ‘diachronic’ and ‘episodic’), in his book Things That Bother Me: Death, Freedom, the Self, etc. (New York Review Books, 2018). By way of a conclusion, I leave you with a quote from my late teacher and friend, Ninian Smart:
“Critical analysis will suggest that we tend to live in a certain amount of aporia. Do we, when it comes to the crunch, really have a systematic worldview? We have an amalgam of beliefs, which we may publicly [and misleadingly or even falsely or disingenuously] characterize in a certain way. I may say that I am an Episcopalian, but how much of my real worldview corresponds to the more or less official views of the Episcopal Church? How much is in any case left out by an ‘official worldview’ which tells me nothing directly about cricket, being Scottish, having a certain scepticism about nationalism, thinking that there is life on other worlds, shelving the problem of evil [part of the theodicy question; in some societies, the problem of evil may translate into the problem of suffering] or other matters. Our values and beliefs are more like a collage [or perhaps bricolage] than a Canaletto. They do not even have consistency of perspective.”—From Smart’s (unfortunately titled) book, Religion and the Western Mind (State University of New York Press, 1987): 16-17.
Some Relevant Bibliographies
Posted at 09:07 AM in Patrick S. O'Donnell | Permalink | Comments (0)
You must know the parable about the frog that sits in a pot of water being gradually heated, allowing itself to be boiled alive: because the change happens gradually, it never realizes it should leap out. Reading Kathryn Paige Harden’s book The Genetic Lottery: Why DNA Matters for Social Equality (2021) is a similar experience, as the author ingenuously points out. ‘Like a frog being slowly boiled alive,’ she observes, readers follow her argument ‘from an uncontroversial premise to a highly controversial one.’
* * *
What follows is an edited selection from “Why Biology is Not Destiny,” M.W. Feldman and Jessica Riskin’s review, in the NYRB (April 21, 2022), of Kathryn Paige Harden’s book The Genetic Lottery: Why DNA Matters for Social Equality (Princeton University Press, 2021). The sample of comments from motley members of the chattering classes on the book’s Amazon page reveals something bordering on feckless enthusiasm for her argument; even the well-known philosopher Peter Singer joins the chorus. After the snippets I offer a brief comment.
[….] “[Harden] introduces many comfortably room-temperature premises: measurement is essential to science; people differ genetically; genes cause conditions such as deafness; a recipe for lemon chicken produces variable results but never leads to chocolate-chip cookies. Lulled to complacency by such anodyne and often homey observations, we soon find ourselves in a rolling boil of controversial claims: genes make you more or less intelligent, wealthier or poorer; every kind of inequality has a genetic basis. Harden is right that such assertions are controversial, but they’re nothing new. The idea of a biological hierarchy of intelligence arose alongside the first theories of human evolution. It never goes away when discredited, just changes forms. [….]
Biological essentialism, aimed at demonstrating an innate hierarchy of intelligence, is going strong after more than two centuries of empirical failure. There’s always a new approach waiting in the wings. This time it’s ‘genome-wide association studies’ of people’s ‘single-nucleotide polymorphisms.’ A single-nucleotide polymorphism (SNP) is a spot on the genome where people can have different variants: alternative nucleotides in their DNA. An average human has about 3.2 billion nucleotides and four million to five million single-nucleotide polymorphisms in their genome, and the genomes of any two people are about 99.9 percent the same. A genome-wide association study (GWAS) calculates a statistical correlation between patterns of DNA variants and a particular phenotype, or observable characteristic, among the sampled people. [….]
So far, these so-called polygenic indices haven’t indicated any therapeutic interventions, and their value is a matter of debate. But meanwhile, a growing number of social scientists, primarily in economics, psychology, and sociology, have seized upon the technique as a way of studying their own subjects. Social scientists engaging in ‘sociogenomic’ research exploit existing genetic databases, which have recently become cheap to produce and readily accessible, to conduct genome-wide association studies for ‘social-science-relevant outcomes’ such as the one Harden features most prominently in her book, ‘educational attainment.’ For a given life outcome—dropping out of high school, earning a Ph.D., having a teen pregnancy, becoming wealthy, going bankrupt—these writers claim they can use a genome-wide association study to generate a ‘polygenic index,’ or overall genetic score revealing a person’s likelihood of having that outcome.
Among other phenotypes associated with ‘educational attainment’ for which Harden cites genome studies are ‘grit,’ ‘growth mindset,’ ‘intellectual curiosity,’ ‘mastery orientation,’ ‘self-concept,’ ‘test motivation,’ and especially ‘a trait called Openness to Experience, which captures being curious, eager to learn, and open to novel experiences.’ Harden doesn’t reveal just who calls this important trait ‘Openness to Experience’ or how they measure it. Surely, there must be disagreement among researchers about what constitutes this phenotype or others in the list, such as ‘grit.’ More so, at any rate, than about what constitutes macular degeneration.
Explaining how social scientists make genome-wide association studies and polygenic scores, Harden writes:
‘Correlations between individual SNPs and a phenotype are estimated in a “Discovery GWAS” with a large sample size…. Then, a new person’s DNA is measured. The number of minor alleles (0, 1, or 2) in this individual’s genome is counted for each SNP, and this number is weighted by the GWAS estimate of the correlation between the SNP and the phenotype, yielding a polygenic index.’
This alphabet soup in the passive voice implies that no one actively does all this estimating, measuring, counting, weighting, correlating—or that these are such technical processes that any human presence in them is irrelevant. But people are making interpretive decisions at every stage: how to define a phenotype and select people to represent it, how to count these people, which single-nucleotide polymorphisms to consider, how to weight and aggregate them. Interpretive decisions are of course essential to all science, but here there are a great many opinions dressed up in facts’ clothing. ‘This polygenic index will be normally distributed,’ Harden continues, now disguising an assumption—that there are intrinsic cognitive and personality traits whose distribution in a population follows a bell-shaped curve, a founding axiom of eugenics—as an objective fact. Harden then tells us that ‘a polygenic index created from the educational attainment GWAS typically captures about 10–15 percent of the variance in outcomes.’ All these trappings of scientific objectivity notwithstanding, a polygenic index ‘captures’ differences in educational outcomes the way Jackson Pollock’s Summertime painting captures the season: as a reflection of its creator’s radically subjective view of things (which is just fine for abstract expressionism).
If you find a magical hammer that, whenever you swing it, rewards you with funding and professional advancement, you look at your research area and see nothing but nails. Genome-wide association studies are the social sciences’ new magical hammer. Macular degeneration seems plausibly to be a nail: genomic analysis revealed two sets of single-nucleotide polymorphisms that were importantly associated with having the disease. Schizophrenia appears not to be a nail, though it might have some structural features a hammer could help with. The things social scientists have been swinging at aren’t just non-nails. They are to nails as ships to sealing wax, as cabbages to kings. To suggest that macular degeneration has genetic causes is to make an empirically testable proposal; to suggest that ‘grit’ or ‘openness to experience’ has genetic causes is to make a category mistake. These are interpretive descriptions, made of ideas, opinions, and practices, not molecules. [….]
Harden’s purpose in The Genetic Lottery is to popularize the claim that social inequalities have genetic causes, and to argue that if progressives want to address inequality, they’d better confront this fact. In presenting her case, Harden revives central features of the earlier, now-discredited biological theories of intelligence: the presentation of interpretive opinions as objective facts, as we’ve seen; spurious reduction to a biological mechanism that is not only hypothetical but unspecified; and a claim to be writing in the interest of social progress.
Regarding spurious reduction to an unspecified mechanism: although Harden pays lip service to the principle that correlation is not causation, she both implies and explicitly argues that correlations of genetic differences with social ones indicate genetic causes of social differences. When merely implying causation, she uses weasel words: genes are ‘relevant’ for educational attainment; they are ‘associated with’ first having sex at an earlier age; they ‘matter’ for aggression and violence; social and economic inequalities ‘stem from’ genetics. Harden also says it directly: genes ‘cause’ differences in educational outcomes; genetic differences ‘cause’ differences in social and behavioral outcomes; a ‘causal chain’ links a genotype with the social behavior of going to school, and another such chain joins genetics to performance on intelligence tests. [….]
Such talk of entanglements and braids is misleading, implying that genetics and environment are discrete strands, when in fact living things are in continual interaction with their environments in ways that transform both at every level. The late Harvard evolutionary biologist and geneticist Richard Lewontin used the concept of the “reaction norm”—a curve expressing the relation between genotype and phenotype as a function of the environment—to describe this interaction and its implications. Lewontin showed that since the relationship between genotype and phenotype depends on the environment in which the phenotype is measured, one can’t infer genetic causes from correlation and regression calculations. Harden mentions Lewontin as a critic of behavioral genetics, but she implies that he didn’t approve of the field simply on ideological grounds. She never mentions or engages with his substantive refutation of the core assumption that genetic and environmental causes of behavior are separable.
With an admirable poker face, Harden writes that what behavioral geneticists really care about is environment: they want to identify the genetic causes of different life outcomes just to get them ‘out of the way, so that the environment is easier to see.’ This is impossible, even as an ideal, because the environment is in the genome and the genome is in the environment. We can no more unbraid genetics and environment than we can unbraid history and culture, or climate and landscape, or language and thought.
Progressives, Harden says, shouldn’t be afraid to acknowledge genetic causes of inequality; instead, they should work to narrow ‘genetically associated inequalities’ with programs specially benefiting the genetically disadvantaged. She implies it’s a new departure for a political progressive to espouse the idea of inherent differences in intelligence, but in fact scientists arguing for a biological hierarchy of intelligence have traditionally invoked progressive values. Harden indeed sounds like Spencer, who said his science would help rectify ‘ignorant legislation’ and ‘rationalize our perverse methods of education.’
Just how can behavioral genetics serve the interest of social progress toward greater equality? Harden never says. She does mention three examples of programs or policies that she claims have helped to rectify natural imbalances in intelligence, but none involve genomic analysis. [….]
Harden was right to compare her reasoning to the reasoning of the frog boilers. Both the logic and the experimental program of frog boiling exemplify the essentialist tradition in which she is a participant. But the theory doesn’t hold up in experiments: the frog, if intact and in a vessel it can escape, will actually jump out rather than be boiled alive. Our message to you, reader, is accordingly simple: jump out.”
* * *
This sort of sullied science, while perhaps at points rhetorically shrewd (when not vague or ambiguous), involves both speculative reductionism and deterministic behaviorism, including rather simple-minded conceptions of human nature and agency, as well as of socio-cultural and economic circumstances and situations. Instances of such sullied science stubbornly persist, periodically reappearing to make extravagant theoretical claims while assuming an unwarranted scientific confidence that stems in part from a dispositional failure to properly digest the history of science. Thus we need to remind ourselves of earlier arguments made by Stephen Jay Gould, R.C. Lewontin, John Dupré, Hilary Rose, Steven Rose, Mary Midgley, and Philip Kitcher, among others. The review’s co-authors do a fine job of steering us in the right direction. Some form of “reductionism” is unavoidable in the natural and especially the social sciences, yet, as Jon Elster notes, objections to reductionism “can be well founded” if they are “prefixed by ‘premature,’ ‘crude,’ or ‘speculative.’” He proceeds to provide examples in the literature (e.g., mechanistic physiology, natural selection as an analogy for social phenomena, sociobiology, evolutionary psychology …; cf. too what Deirdre McCloskey once termed the ‘secret sins of economics’ which, thanks to her and others, are no longer ‘secret’). Alas, “The idea of a biological hierarchy of intelligence arose alongside the first theories of human evolution. It never goes away when discredited, just changes forms.”
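A brief technical aside: the polygenic index at the center of Harden’s argument (and of the reviewers’ critique) is, computationally, nothing more than a weighted sum. The following minimal sketch, in Python, illustrates the calculation quoted above: count a person’s minor alleles (0, 1, or 2) at each SNP, multiply each count by the effect size estimated in the discovery GWAS, and sum. The SNP identifiers and weights here are invented placeholders for illustration, not estimates from any actual study.

    # A minimal sketch of the polygenic index calculation described above.
    # The SNP identifiers and weights are hypothetical placeholders,
    # not estimates from any real genome-wide association study.

    gwas_weights = {        # per-allele effect sizes from a "Discovery GWAS"
        "rs0001": 0.021,
        "rs0002": -0.008,
        "rs0003": 0.013,
    }

    genotype = {            # one person's minor-allele counts (0, 1, or 2)
        "rs0001": 2,
        "rs0002": 0,
        "rs0003": 1,
    }

    def polygenic_index(genotype, weights):
        # Weighted sum: the allele count at each SNP times its GWAS weight.
        return sum(count * weights[snp] for snp, count in genotype.items())

    print(polygenic_index(genotype, gwas_weights))  # prints roughly 0.055

Note what even this toy version makes plain: every number entering the sum (which SNPs are included, how their weights were estimated, how the phenotype behind those weights was defined and measured) embodies a prior human decision, which is precisely the reviewers’ point about interpretive choices dressed up as objective measurement.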
Further Reading (bibliographies and other material)
Posted at 09:00 AM in Patrick S. O'Donnell | Permalink | Comments (0)
“These days it feels hopeless, even pathetic, to go on looking back at the Arab Spring. Yet a minority of Egyptians remain committed to its memory and ideals. [….] Human rights groups estimate that since the coup the Sisi regime has detained 60,000 political prisoners—so many that it has had to build a network of secret prisons, where torture is endemic. One of the few organizations that has monitored the number of detainees and prisons in Egypt, the Arab Network for Human Rights Information, announced on January 11 that it was closing because of unsustainable levels of harassment from the state. Most political prisoners are accused by the authorities of being members of the Muslim Brotherhood. But all of Egypt’s most prominent secular and liberal activists have faced prosecution, imprisonment, abuse, surveillance, or exile. [….] In January the US government withheld from Egypt $130 million in aid because of its dismal human rights record, while simultaneously authorizing more than $2 billion in arms sales.” Please see Ursula Lindsey’s review essay in the NYRB, “Refusing Silence in Egypt” (April 21, 2022 issue).
Having been among those who at the time were moved and inspired by the “Arab Spring,” I find it painful to read and recall the subsequent history of Egypt and of most of the other countries that basked in the light of those protests and uprisings, countries whose citizens were moved to struggle courageously on behalf of democratic revolutionary hopes and dreams. One thing we cannot do is let the powerful and ruthless dictators and autocrats, the anti-democratic thugs who have since come to power, erase the truth and reality of those times. Of course there are many other things we should do to oppose authoritarian suppression, coercion and violence, and to combat the systematic violation of basic human rights, be they civil and political or economic, social and cultural. But we live in a time when too many people lack any consistent or coherent sense of fundamental values and purposes, when too many people feel, or at least act as if, they are overwhelmed by the myriad events, be they political or “natural,” that define this apocalyptic-like period of human history, when many if not most of us are in denial, well-versed in the art of self-deception, and dispositionally prone to indulging in the crudest of ideologies, the vilest of myths, the darkest phantasies. Yes, there are exceptions to these generalizations, but one suspects they will suffer the fate of Pascal’s red balloon. The question is whether the next chapter, or the end of our story, will be anything even remotely resembling the end of Albert Lamorisse’s 1956 French film, Le ballon rouge.
Relevant Bibliographies
Posted at 01:27 PM in Patrick S. O'Donnell | Permalink | Comments (0)
I was moved to post this after reading the Just Security article I earlier shared, “Hijab Bans, Hindutva, and the Burden of Hindsight: Why Global Leaders Must Act to Prevent Genocide in India.”
“Incitement has been a precursor to, and a catalyst for, modern genocides. It may even be a sine qua non, according to witnesses and the abundant historical and sociological literature on the topic.” — Susan Benesch
In the study of genocide, especially among practitioners and theorists of international criminal law, the notion of “incitement to genocide” (which must be distinguished from ‘hate speech’ as that is now legally and politically defined: such speech has been criminalized in domestic law but not international law) has become a plausible or credible idea, even if it is sometimes referred to or invoked too casually or loosely (as when Mahmoud Ahmadinejad, then President of the Islamic Republic of Iran, was accused of incitement to genocide based on remarks he made about Israel).
In what follows I have drawn primarily upon two sources: Susan Benesch’s article “Vile Crime or Inalienable Right: Defining Incitement to Genocide,” Virginia Journal of International Law 48, no. 3 (2008): 485-528, as well as some of my notes from a chapter in Larry May’s book Genocide: A Normative Account (Cambridge University Press, 2010).
The principal elements of incitement to genocide:
Professor Benesch proffers the following six-prong inquiry by way of distinguishing the crime of incitement to genocide from hate speech:
* In the words of Benesch, “To commit incitement to genocide, a speaker must have authority or influence over the audience, and the audience must already be primed, or conditioned, to respond to the speaker’s words. Incitement to genocide is an inchoate crime, so it need not be successful to have been committed, but it would be absurd to consider a speech incitement to genocide when there is no reasonable chance that it will succeed in actually inciting genocide. And to prosecute a case like this would be a needless (and possibly harmful) restriction of the right to free speech.”
The following snippets are from my notes to Chapters 10 and 11, “Incitement of Genocide and the Rwanda Media Case,” and “Instigating, Planning, and Intending Genocide in Rwanda,” in Larry May’s Genocide: A Normative Account (Cambridge University Press, 2010): 180-220. At a later date I hope to compose these into proper sentences and paragraphs!
Relevant Bibliographies
Posted at 05:55 AM in Patrick S. O'Donnell | Permalink | Comments (0)
I will later have an introductory post on the elements of “incitement to genocide” in international criminal law. “An extreme form of hate speech [although legally distinguishable from same], incitement to genocide is considered an inchoate offense and is theoretically subject to prosecution even if genocide does not occur, although charges have never been brought in an international court without mass violence having occurred. ‘Direct and public incitement to commit genocide’ was forbidden by the Genocide Convention in 1948. Incitement to genocide is often cloaked in metaphor and euphemism and may take many forms beyond direct advocacy, including dehumanization and ‘accusation in a mirror’ [i.e., ‘falsely imputing to your adversaries the intentions that you have yourself and/or the action that you are in the process of enacting,’ what in psychology is termed ‘projection’]. Historically, incitement to genocide has played a significant role in the commission of genocide, including the Armenian genocide, the Holocaust and the Rwandan genocide.”
* * *
The following is from a piece posted a couple days ago at Just Security, “Hijab Bans, Hindutva, and the Burden of Hindsight: Why Global Leaders Must Act to Prevent Genocide in India.” See the original for the entire article, including the many embedded links.
“In the nearly four months since the United States Holocaust Memorial Museum’s Simon-Skjodt Center for the Prevention of Genocide released its report ranking India as being at the second-highest risk of a new mass killing, the systematic targeting of Muslims in the country has only escalated to greater extremes.
On Mar. 15, the Karnataka High Court upheld a hijab ban imposed by some educational institutions in the southern Indian state, a stronghold of the Hindu-nationalist Bharatiya Janata Party (BJP). The ruling followed the High Court’s interim order prohibiting religious attire in schools, issued in February after weeks of escalating religious tensions in Karnataka. In early January, a government-run women’s college in the city of Udupi banned students from wearing hijab in classrooms; Muslim students who attempted to attend their classes while wearing hijab were denied entry. Other colleges in the state soon imposed similar bans, and the government of Karnataka subsequently issued an order in support of those bans. Protests decrying the hijab prohibition and counter-protests by male students in saffron scarves (saffron being traditionally associated with Hinduism, but today largely associated with the BJP and right-wing Hindu ideology) sparked violence, resulting in the closure of all high schools and colleges throughout the state for three days. Five Muslim women students filed a constitutional challenge to the state government’s order, calling on the High Court to restore their rights. In finding that the order did not violate Muslim women’s constitutional rights, the High Court—arguably reaching far beyond its expertise and jurisdiction—undertook, in its opinion this past Tuesday, a reading and interpretation of the Quran and books on Islam to argue that hijab is not religiously mandated.
The hijab controversy comes on the heels of what has been perhaps the most direct call yet for the genocide of Muslims, in order to transform India into a Hindu nation, at a religious convening in the northern city of Haridwar in December. Emboldened with a sense of impunity resulting from ongoing complicity by law enforcement officials and political leaders turning a blind eye to, or even participating in, attacks on Muslims, Hindutva (right-wing Hindu-nationalist) leaders called on Hindus to take decisive action towards the establishment of a Hindu nation.
During the three-day gathering in Haridwar, the prominent Hindu supremacist Yati Narsinghanand Giri told the crowds, ‘You need to update your weapons . . . More and more offsprings and better weapons, only they can protect you.’ Another speaker said, ‘Even if just a hundred of us become soldiers and kill two million of them, we will be victorious . . . If you stand with this attitude only then will you [be] able to protect “sanatana dharma” [Hinduism].’
Another outspoken Hindutva leader, Swami Prabodhananda Giri, who has close connections to BJP leadership, cited the genocide of the Rohingya in Myanmar as an illustrative example for Hindus to follow. Two weeks later, during an appearance in the city of Ghaziabad, he followed up his remarks with the statement, ‘We will stand up against every jihadi in India and clean the country of their presence,’ defining a jihadi as those who have read and understood the Quran.
Although Yati Narsinghanand Giri and another speaker were arrested (under Sections 153 and 298 of the Indian Penal Code) for their comments at the Haridwar event, the question has been raised as to why the explicit calls for violence against Muslims were not met with charges under the Unlawful Activities (Prevention) Act or the National Security Act, even while provisions of these acts have been used to target Muslims, such as in the case of journalist Siddique Kappan. Meanwhile, similar events to the Haridwar convening are slated to be held all over the country.
The hijab ban and the Hindutva convenings inciting violence are part of a broader and ongoing targeting of Muslims in India, which has manifested in numerous ways, including the creation of online apps to ‘auction’ Muslim women; the 2019 Citizenship Amendment Act (CAA) which offered a path to citizenship for individuals from persecuted religious minority groups, with the exception of Muslims; incitement of violence by BJP leaders against Muslims in the 2020 Delhi riots, which stemmed from protests against the CAA; vigilante attacks on Muslims in the name of protecting cows; ‘love jihad’ laws that aim to prevent Muslim men from marrying Hindu women and ban ‘unlawful’ religious conversions in the context of interfaith marriage (seen most recently in a bill tabled in the Haryana Assembly in early March); and the disenfranchisement and persecution of Muslims in Assam. [….]
Hindutva rhetoric targeting Muslims, including at the Haridwar gathering, has included the exaltation of Nathuram Godse, who assassinated Mahatma Gandhi based on a perception that Gandhi was too pro-Muslim and was betraying Hindus. Demonstrating similar sentiment, the Indian Ministry of Culture last year tweeted a birthday tribute to M.S. Golwalkar. From 1940 to 1973, Golwalkar led the right-wing, Hindu-nationalist Rashtriya Swayamsevak Sangh (RSS)—of which Godse had also been a part—and was initially arrested for Gandhi’s assassination. Golwalkar, in his book Bunch of Thoughts, glorified Hitler and cited Nazi Germany as a model for eliminating minorities.
Throughout the steady proliferation of hateful, anti-Muslim rhetoric, the weaponization of religious identity for political gain, and the literal call to arms against their own citizens, there has been pin-drop silence from Modi and other top BJP leaders. Undoubtedly, much of the recent escalation in the fueling of religious tensions and the reassertion of the narrative of a Hindu India is directly tied to the Uttar Pradesh elections—seen as a barometer for national elections to be held in 2024—which just concluded and saw a retention of BJP power under the divisive Hindu-nationalist monk and Chief Minister Yogi Adityanath. However, the tacit endorsement by BJP leadership of using any means necessary for political gain—including mobilizing concrete action to make India a Hindu nation—may have consequences that far outlast the elections. The Haridwar convening showcased layers within the Hindutva machinery, with what are known as the ‘Trads’ being more extreme and direct in their calls for extermination of minorities, and the ‘Raitas,’ who include diehard BJP supporters who engage in other forms of hate speech. The flames that are stoked now under the watch of the Raitas, by currently BJP-aligned Trads, may become impossible to extinguish later.
In addition to India’s own leadership, the United States has also been notably silent. Before U.S. relations with India began devolving as a result of India’s mulishness in refusing to join other democratic nations in condemning Russia, the Biden administration looked the other way—failing to acknowledge India’s crackdowns on free expression and dissent, persecution of religious minorities, and overall shift towards authoritarianism—and continued to bolster India as its key ally and partner in the Indo-Pacific region. In a Congressional briefing in January, the founding president of Genocide Watch, Gregory Stanton, urged ‘the U.S. Congress to pass a resolution that warns genocide should not be allowed to occur in India,’ and for President Biden to warn Modi of the potential implications of a genocide on U.S.-India relations. The obligation on the United States to intervene diplomatically is not only a moral one; a genocide in India would cause political upheaval with potentially disastrous and long-lasting consequences for the security of the region.” [….]
Further Reading
Relevant Bibliographies
Posted at 09:57 AM in Patrick S. O'Donnell | Permalink | Comments (0)
[What follows is something I first wrote in 2004 and have since slightly revised. I hope to further revise it anon, but I want to share it first to see if there are any comments or suggestions that might be of help in that endeavor. One part of the revision will attempt to address a few of the arguments made in Umut Özsu’s chapter, “An Anti-Imperialist Universalism? Jus Cogens and the Politics of International Law,” found in Martti Koskenniemi, Walter Rech, and Manuel Jiménez Fonseca, eds. International Law and Empire: Historical Explanations (Oxford University Press, 2017): 295-313.]
* * *
“The Vienna Convention on the Law of Treaties discusses what are called jus cogens norms, norms that cannot be overridden even by express treaty. Given their place in international law, jus cogens norms are sometimes equated with constitutional principles in a domestic legal system [arguably, the notion of ‘human dignity’ found in many constitutions appears to stand apart in this respect]. Article 53 of the Vienna Convention provides that ‘a norm of jus cogens must satisfy three tests: the norm must be (a) “accepted and recognized by the international community of States as a whole” as a norm from which (b) “no derogation is permitted,” and which (c) “can be modified only by a subsequent norm of general international law having the same character.”’ Jus cogens norms are norms from which no derogation is permitted, and hence seemingly norms that sit at the apogee of international norms, and which carry obligations erga omnes, that is, obligations on everyone.” — Larry May, Aggression and Crimes Against Peace (Cambridge University Press, 2008): 152 (quoting Ragazzi).
* * *
Jus cogens (L., ‘compelling law’) is a peremptory (mandatory) norm of general international law that permits no derogation. Such a norm or rule has plausibly—but I think inaccurately—been viewed as one of the “general principles” of law (or part of the ‘common judicial ethos of civilized states’) that falls within the hierarchically ordered sources of public international law: in other words, ranked in some sense behind treaties (or ‘conventions’) and the customary practices of states (‘consensual’ sources), yet placed before the judicial decisions and writings of the publicists (‘nonconsensual’ sources) (Article 38 of the Statute of the International Court of Justice). However, while this is not altogether an accurate or felicitous categorization, it does allow us to consider the conceptual origins and properties of such a norm as “nonconsensual” in the first place, if only because jus cogens reflects the stubborn persistence of fundamental facets or features of Natural Law philosophy and principles, even if we choose to abandon or ignore some of the metaphysical presuppositions and assumptions specific to Natural Law traditions (especially those that are manifestly religious in character, that is, understood as belonging to a particular religious tradition).
This historical and conceptual link to Natural Law formulations accounts for the status of jus cogens norms (the apparent redundancy—‘compelling law’ norms—being morally suggestive) as overriding principles in the international legal system, for their conspicuous indelibility, and for the fact that, in Brownlie’s words, “more authority exists for the category of jus cogens than exists for its particular content” (Brownlie 1998: 516-17). In part this is owing to the generality and somewhat abstract character of Natural Law propositions, which are often open-ended, a feature viewed, depending on one’s legal perspective, as either a virtue or a vice. In one sense, jus cogens norms, in keeping with the Natural Law assertions from which they are derived, are in the first instance about how human beings must be treated if we are to assign enduring and universal status to men and women as, minimally, rational and moral agents capable of individuation and, given the right conditions and circumstances, of self-realization as well (the latter need not be conceived in religious terms, as the Marxist conception of same reveals). At the same time, this serves as an affirmation of the ongoing relevance of moral principles to, and of the corresponding value and rational assessment of, international law norms. Jus cogens norms represent an unabashed, arduous and ambitious attempt to combine the notions of “is,” “ought,” and “can” while asserting or implying the claim that human beings are, at least for overarching legal purposes (and whatever else they may be), both freedom-loving and justice-seeking creatures possessed of intrinsic dignity, and thus the locus of moral capacities or powers rendering them worthy of being accorded fundamental human rights, rights that recognize, respect, and protect human dignity and the pursuit of freedom and justice, all of which is indissolubly bound up with our potential and actual moral capacities, virtues, and principles.
The relatively recent invocation of such norms in international law suggests a genuine moral endeavor to canalize the legal system’s ability to express an urgent and heartfelt response to the comparatively rampant barbarism and enormities of evil (aerial and atomic bombing, genocide, famines, war crimes …) practiced by more than a few (democratic and otherwise) nation-states possessed of an intellectual, technocratic, and political hubris invariably associated with rather crude notions of reason, progress, and modernity in the twentieth century and often sullied with the histories of colonialism and empire. The collective crimes of these states were often committed under the putative warrant or cover of raison d’état and frequently tied to ill-understood or morally evasive “dirty hands” apologetics or justifications and ideologically self-serving conceptions of “realism.” Despite its fundamental nonconsensual nature or essence, jus cogens is not, in fact, on the same footing as what are known as “general principles of law,” be it, for instance, collateral estoppel, reparation for damage, or equity (here, in its ordinary language or principle of justice sense); for such principles, found within municipal law (i.e., domestic legal systems), largely serve as “gap fillers” or supplementary rules of international law. Enumerating a hierarchical priority of application with respect to sources is irrelevant to jus cogens norms, or, if one prefers, jus cogens norms are an anomaly or exception to this schema in as much as they can invalidate (trump) other bodies of rules, those generated by treaty or custom, for example, and therefore transcend, morally speaking, hierarchical schemes of the sources of international law. At the same time, a jus cogens norm, by definition, implication, and possible institutionalization, makes explicit reference to those generalizable values and minimal moral principles infrequently and inchoately expressed and realized on occasion in the international legal system itself.
Examples of jus cogens rules are varied (and candidates for such status even more so) and, at times, vigorously contested, but commonly cited exemplars include the following: the United Nations Charter’s prohibition on the use of force (Art. 2.4); the laws of genocide; the principle of racial non-discrimination; some human rights (e.g., the right to life and freedom from torture); general rules on (collective) self-determination; and crimes against humanity. The formal function and practical effect of jus cogens is a clear and decisive legal delimitation of, and thus a necessary constraint on, the scope and substance of State sovereignty. If jus cogens trumps the will or consent of contracting or colluding States, it formally and importantly functions on the order of an international constitutional constraint, constraining, that is, the possible or actual behavior of States to the degree that such behavior egregiously detracts from, erodes, or subverts the constituent elements of global public order and security, transnational civil society, or the common good of the international community. Minimally, its articulation may have a deterrent effect, as states seek to avoid shunning and ostracism in the international community, or informal and formal sanctions from the dominant states. Depending on one’s vantage point, the vagueness or open-ended texture of jus cogens is either a political liability or a source of ethical leverage, possibilities obscured somewhat by the putative “positivization” of jus cogens in the Vienna Convention on the Law of Treaties (1969, 1986), which includes provision for resort to the International Court of Justice (ICJ) in the event of intractable disputes as to a norm’s precise content or specific application. The definition of a peremptory norm provided by the Convention looks forward to its acceptance and recognition by the “international community of States as a whole.” Much rides on the precise interpretation of this clause. Apart from the vague formulation, it helps to explain why some insist on seeing jus cogens as an admittedly “higher” exemplification of, if not simply and merely, customary international law. A. Mark Weisburd, for example, goes to the heart of the problem: “a concept that originated in the belief that moral principles imposed legal limits on state authority—in effect, applying a natural law approach—was codified in a form that grounded limitations on states’ freedom solely on the acceptance of those limits by states, that is, in a form shaped to satisfy positivist conceptions of the nature of law” (Weisburd 2002: 20).
Yet legal validation through partial codification should not be confused with the use of jus cogens by jurists to shape the international legal system to fundamental values and ends like the pursuit of justice and the promotion of the public good: jus cogens’ conceptually intrinsic moral aspirations are not the same as, or reducible to, the conditions of its validity; nor will it do to derisively dismiss its increased invocation and application in the world community as merely “rhetorical.” Courts continue to act as if the content of jus cogens is not confined to customary international law as created by the practice of states. Moreover, as noted by de Aréchaga, the Convention’s definition confuses the legal effects of the rule with its intrinsic nature: “…it is not that certain rules are rules of jus cogens because no derogation from them is permitted; rather, no derogation is allowed because they possess the nature of rules of jus cogens” (in Cassese 2001: 140).
Jus cogens norms can abide by the empirical possibility that, on occasion, “might makes right” (e.g. humanitarian intervention in cases, say, of crimes against humanity by powerful states such as members of the UN Security Council), but they cannot endorse this proposition as a normative prescription (as occurs with ‘victor’s justice’). One of the assumptions of those drawn to a non-positivist conception of jus cogens is the cogency and desirability of a moral theory of international law (one generated by something similar if not identical, transnationally speaking, to a Rawlsian ‘overlapping consensus’), an assumption explicitly denied by those Realists who “typically draw a meta-ethical implication from their descriptive-explanatory-theory: broadly, that morality is inapplicable to international relations” (Buchanan and Golove 2002: 873).
Unlike the Realist, and like Natural Law principles in general, jus cogens “places the burden of proof squarely upon those who wish to justify murder or torture, untruth or inequality, rather than upon those who wish to invoke the sacred right to life, liberty, to truth, and to a measure of equal respect” (Iyer 1979: 60). The formal natural law properties of a jus cogens norm (metaphysical, but not necessarily religious) mean that there is an inescapably logical incompleteness to any specific formulation or codification, no doubt evidence for those eager to conclude that the concept lacks operative force and practical import, and perchance one reason for not a few writers to conclude (implausibly, in my view) that “the status of jus cogens as an element of international law is quite confused” (Weisburd 2002: 25). This goes some distance in accounting for the fact that “partly because of its perceived potency, a peremptory norm is more difficult to prove than is a usually controversial rule of customary international law” (Janis 1999: 64).
We await a more thorough philosophical and moral examination of international law by legal theorists, one that might clarify the Natural Law (or Natural Law-like) assertions presupposed or assumed by any given jus cogens rule. A filling out of the Natural Law propositions that buttress a proposed jus cogens norm would clarify its moral significance and legal content, while prompting the search for a stable transnational rational and moral consensus of citizens and jurists alike as a prelude to further successful institutionalization. The worthiness of any proposed peremptory norm would thereby make explicit the ratio decidendi, if you will, that warrants jus cogens status. And it could further serve to allay current fears about the possible political abuse or misuse of jus cogens.
The assertion and justification of jus cogens is logically and legally prior to the determination of obligatio erga omnes (obligation towards all, either distributively or collectively), that is, obligations that are universal and therefore binding on all States. As a jus cogens norm by definition concerns values and goals fundamental to the entire international community “in view of the importance of the rights involved” (Barcelona Traction case), its legal corollary is an obligation erga omnes. International crimes that rise to the level of jus cogens create obligations erga omnes that are non-derogable, including, but not limited to, the following: a duty to prosecute or extradite, the nonapplicability of statutes of limitations, and universal jurisdiction over perpetrators of such crimes. The Rome Statute of the International Criminal Court, which entered into force on July 1, 2002, aims to safeguard the rights and obligations of jus cogens norms in that domain of international law (at least for parties to the statute).
Both jus cogens and obligations erga omnes permit us to appreciate the oft-forgotten truth that “International law is much more than a simple set of rules. It is a culture in the broadest sense in that it constitutes a method of communicating claims, counter-claims, expectations and anticipations as well as provide a framework for assessing and prioritizing such demands” (Shaw 1997: 53). Therefore any philosophical and corresponding legal treatment of jus cogens (and obligations erga omnes) will need to untangle the analytic, epistemic, and moral issues and questions that properly belong to democratically sensitive international law. And any institutionally sensitive moral theory of international law should assist jurists in clearing up the apparent or alleged conceptual confusion that clings to jus cogens. It will not thereby eliminate, however, the disagreements and conflict that are part and parcel of attempts to further incorporate such peremptory norms into international law, attempts that betray a longing for, if not presaging, what has been variously termed world or global law, transnational or cosmopolitan law.
Posted at 03:08 PM in Patrick S. O'Donnell | Permalink | Comments (0)
I cringe when I hear those committing indefensible, horrific acts of violence against their fellow human beings described as “acting like animals,” “behaving like beasts,” and so forth. No less a figure than Justice Robert H. Jackson* delivered an address at the annual meeting of the American Bar Association on Oct. 2, 1941, “The Challenge of International Lawlessness,” in which he compared the international legal order among the world of nation-states to “the law of the jungle,” in contrast to the desirability, if not the possibility, of an “international order based on reason and justice.” Justice Jackson was speaking just prior to the U.S. entry into WW II, a war that followed the “war to end all wars,” that is, World War I. A few months after his address, of course, the U.S. was officially at war with both Japan and Germany, prompted by the bombing of Pearl Harbor. The barbarism of these world wars was in no way, literally or metaphorically, representative or indicative of “the law of the jungle.” Only human animals—not our fellow creatures in the animal kingdom—individually and collectively, behave so ruthlessly and destructively, so aggressively and violently, with each other. The exaltation of reason, with seeds in the Renaissance and its apogee during the European Enlightenment, need not, and should not, have occurred at the expense of the disparagement and ill-treatment—as ‘things’ or ‘property’—of nonhuman animals (it should be noted that philosophers and intellectuals such as Montaigne, Paine, Voltaire, Bentham, and J.S. Mill did in some respects speak on behalf of extending at least humane consideration to animals).
We should assiduously avoid using derogatory, damning, and false descriptions or characterizations of nonhuman animal life and behavior to describe what is peculiarly and conspicuously all-too-human in its violence, barbarism, and destructiveness. The word “brute” is defined as “characteristic of an animal in quality, action, or instinct,” that is, as describing one who behaves in a savage or cruel manner; but such behavior is more characteristic of human animals than of nonhuman ones. It is human beings who are uniquely capable of exemplifying what it means to act as “brutes”; in other words, it is they—us—not animals, who frequently exhibit brutal behavior, be it in our relations with other human beings or with nonhuman animals.
* “Robert H. Jackson (February 13, 1892 – October 9, 1954) was an American attorney and judge who served as an Associate Justice of the United States Supreme Court. He had previously served as United States Solicitor General and United States Attorney General, and is the only person to have held all three of those offices. Jackson was also notable for his work as Chief United States Prosecutor at the Nuremberg trials of Nazi war criminals following World War II. [….] In 1945, President Harry S. Truman appointed Jackson (who took a leave of absence from the Supreme Court), as U.S. Chief of Counsel for the prosecution of Nazi war criminals. He helped draft the London Charter of the International Military Tribunal, which created the legal basis for the Nuremberg Trials. He then served in Nuremberg, Germany, as United States Chief Prosecutor at the International Military Tribunal.”
Posted at 06:28 AM in Patrick S. O'Donnell | Permalink | Comments (0)
I am reading Larry May’s discussion of traditional jus ad bellum and jus in bello principles in his book Aggression and Crimes Against Peace (Cambridge University Press, 2008), which begins with an examination of the ideas of Alberico Gentili (who wrote at the end of the 16th century) and Hugo Grotius (who wrote at the beginning of the 17th century). Among the tidbits I just learned: (i) Gentili “sets the stage for the Bush Doctrine of seeing war as a legitimate means of self-defense even if there is little or no evidence that a danger is impending [or ‘imminent’];” (ii) Grotius, often considered the “founder of international law,” had no formal training in law, although he did have “an important professorship at Leiden in the Netherlands;” (iii) and that, “like Gentili, Grotius also represented various States in international disputes, most famously defending the Dutch for seizing pirate ships that contained vast fortunes that had been stolen from other European countries and not returning the stolen goods to those countries” (reminding one of the bad behavior in our time and place of art galleries, auction houses, and museums with respect to items with known or possible criminal pedigree). This is but a taste—on the tip of the tongue—of the intellectually savory material set out by May.
May’s treatment is largely within (‘largely’ because at times it departs from) the Just War tradition of morality, law, and politics in the West, which evolved out of Christianity and Islam, the former commencing with Augustine. Nonetheless, questions of war and ethics first arose in the Indic/Indian civilization (what is misleadingly reduced to what is now known as ‘Hinduism’), then among the ancient Israelites and the Chinese. But it is in Christianity and Islam that we find a more explicitly articulated and consistent moral and legal tradition about what constitutes “just war,” which might be loosely defined as the search for a middle ground between absolute pacifism and “anything goes.” Although the “just war” tradition arose within religious worldviews, like morality and ethics generally, it eventually became distinguished as a secular form of (moral and legal) discourse on the justification of war as well as the moral and legal principles deemed necessary to constrain the conduct of any war, be it justified in the first instance or not. In contemporary international humanitarian law (IHL), as noted by one of its foremost authorities, the International Committee of the Red Cross (ICRC),
“Jus in bello regulates the conduct of parties engaged in an armed conflict. IHL is synonymous with jus in bello; it seeks to minimize suffering in armed conflicts, notably by protecting and assisting all victims of armed conflict to the greatest extent possible. IHL applies to the belligerent parties irrespective of the reasons for the conflict or the justness of the causes for which they are fighting. If it were otherwise, implementing the law would be impossible, since every party would claim to be a victim of aggression. Moreover, IHL is intended to protect victims of armed conflicts regardless of party affiliation. That is why jus in bello must remain independent of jus ad bellum.”
Thus while distinguishable, and for many purposes independent (May shows how the moral and legal questions sometimes blur this distinction; jus in bello and jus ad bellum are in places and at times demonstrably and unavoidably related to each other), IHL focuses on jus in bello, while contemporary international criminal law, especially in light of the United Nations Charter (1945), concerns itself with jus ad bellum, or what is today termed “aggression and crimes against peace.”
I have introduced these terms so as to make it easier to follow further discussion of, and arguments from, May’s book, which I hope to share soon.
Further Reading (a very select list)
For more titles, please see these two compilations: (i) International Criminal Law and (ii) Violent Conflict and the Laws of War, freely available on my Academia page.
Posted at 03:39 PM in Patrick S. O'Donnell | Permalink | Comments (0)
My latest bibliography is on housing. Here is the introduction:
While the focus is primarily on housing issues (and related topics: urban planning, architecture, and homelessness, for example) in the U.S., there are more than a few titles about housing elsewhere around the world. Like most of my compilations, this one has two principal constraints: books only, and in English (which includes titles translated from other languages). (Please note: although I typically provide the date of a title’s first publication, which is often a hardbound edition, cheaper paperback versions are frequently available as well.)
Posted at 09:46 AM in Patrick S. O'Donnell | Permalink | Comments (0)
First, permit me to state that I wholeheartedly and unreservedly believe in the value and necessity of international criminal justice and law. The foremost reasons for this belief include the moral, philosophical and legal explanations and arguments proffered on the subject by the philosopher Larry May in the following (this is not a complete list):
Incidentally, May treats the crime of torture in international criminal law in the book War Crimes and Just War (2007), listed above, noting that “Torture and other forms of degrading treatment have been condemned by all the relevant documents in international law for over a century.” In a later post I will share some more titles on international criminal justice and law.
That said, I concede that international criminal law proceedings have often amounted to what Danilo Zolo describes in the title of his book, Victors’ Justice: From Nuremberg to Baghdad (2009). Thus, for example, the U.S. and its allies, notably Great Britain (through the British Royal Air Force or RAF), at first under the euphemism of “precision bombing,” engaged in deliberate and utterly indiscriminate (thus not merely ‘disproportionate’) bombing of civilians both in Germany and in German-occupied cities whose residents included Allied citizens (e.g., Paris, Nantes, and Amsterdam). Presumably all of us are also familiar with the unnecessary and unjustified atomic bombing of Hiroshima and Nagasaki. The U.S. Army Air Force (USAAF) began its bombing campaign against Japan in late 1944:
“According to Henry Arnold and Curtis LeMay, bombing civilians was essential in order to break Japanese morale, and this was the quickest way to force them to surrender. At the same time, it was the most efficient method to minimize casualties to their own men. In this sense, Arnold, LeMay and other U.S. military leaders inherited the idea of strategic bombing originally advocated by RAF [British Royal Air Force] leaders in World War I. According to this concept, the killing of enemy civilians is justifiable, no matter how cruel the method; indeed it is indispensable to hastening surrender. U.S. leaders, however, in their public pronouncements, continued to insist that their bombs were directed toward military targets. Consider, for example, President Harry Truman’s announcement immediately after the bombing of Hiroshima: ‘The world will note that the first atomic bomb was dropped on Hiroshima, a military base. That was because we wished in this first attack to avoid, in so far as was possible, the killing of civilians.’ Truman made this statement immediately following the instant killing of 70,000 to 80,000 civilian residents of Hiroshima. By the end of 1945, 140,000 residents of that city died from the bomb. In the end, more than 100 Japanese cities were destroyed by firebombing, and two by atomic bombing, causing one million casualties, including more than half a million deaths, the majority being civilians, particularly women and children.” — From Yuki Tanaka’s introduction to the invaluable volume co-edited with the historian Marilyn B. Young (25 April 1937 – 19 February 2017), Bombing Civilians: A Twentieth-Century History (The New Press, 2009).
Consider too the bombings in Japan that preceded Hiroshima and Nagasaki: in the “final six months of the war, the United States threw the full weight of its airpower into campaigns to burn whole Japanese cities to the ground and terrorize, incapacitate, and kill their largely defenseless residents in an effort to force surrender.” Discussion of this in no way entails ignoring the fact that Japan was earlier (1932-1945) involved in horrific bombings of Shanghai, Nanjing, Chongqing, and other cities, “testing chemical weapons in Ningbo and throughout Zhejiang and Hunan provinces.”
The goal of the U.S. bombing assault on Japanese cities, Mark Selden explains, is found in the words of the officers responsible for the U.S. Strategic Bombing Survey (SBS): “either to bring overwhelming pressure on her to surrender, or to reduce her capability of resisting invasion ... [by destroying] the basic economic and social fabric of the country.” The description of the use of firebombing and napalm on Tokyo (in an area estimated to be 84.7 percent residential) on March 9-10 is chilling: “Whipped by fierce winds, flames generated by the bombs leaped across a fifteen-square-mile area of Tokyo, generating immense firestorms that killed scores of thousands of residents. [....] With an average of 103,000 inhabitants per square mile and peak levels as high as 135,000 per square mile, the highest density of any industrial city in the world, and with firefighting measures ludicrously inadequate to the task, 15.8 square miles of Tokyo were destroyed. An estimated 1.5 million people lived in the burned-out areas. Given a near total inability to fight fires of the magnitude produced by the bombs, it is possible to imagine that the casualties may have been several times higher than the figures presented [100,000-125,000 killed and a roughly equal or higher number wounded] on both sides of the conflict. [....] Subsequent raids brought the devastated area of Tokyo to more than 56 square miles, provoking the flight of millions of refugees. [....] Overall, bombing strikes destroyed 40 percent of the 66 Japanese cities targeted, with total tonnage dropped on Japan increasing from 13,800 tons in March to 42,700 tons in July. If the bombing of Dresden produced a ripple of public debate in Europe, no discernible wave of repulsion, let alone protest, took place in the United States or Europe in the wake of the far greater destruction of Japanese cities and the slaughter of civilian populations on a scale that had no parallel in the history of bombing.”
Please see Mark Selden’s chapter, “A Forgotten Holocaust: U.S. Bombing Strategy, the Destruction of Japanese Cities, and the American Way of War from the Pacific War to Iraq,” in Yuki Tanaka and Marilyn B. Young, eds., Bombing Civilians: A Twentieth-Century History (New York: The New Press, 2009): 77-96. See too Tsuyoshi Hasegawa’s chapter, “Were the Atomic Bombings of Hiroshima and Nagasaki Justified?”: 97-134.
Perhaps some readers are familiar with Gar Alperovitz’s The Decision to Use the Atomic Bomb (1995) (see too the arguments made by John V. Denson). As he wrote on a previous anniversary of the bombing, “Many Japanese historians have long judged the Soviet declaration of war to have been the straw that broke the camel's back, mainly because the Japanese military feared the Red Army more than the loss of another city by aerial bombardment. (They had already shown themselves willing to sacrifice many, many cities to conventional bombing!) An intimately related question is whether the bomb was in any event still necessary to force a surrender before an invasion. Again, most Americans believe the answer obvious as, of course, do many historians. However, a very substantial number also disagree with this view. One of the most respected, Stanford University Professor Barton Bernstein, judges that, all things considered, it seems ‘probable,’ indeed far more likely than not, ‘that Japan would have surrendered before November’ (when the first landing in Japan was scheduled). Many years ago Harvard historian Ernest R. May also concluded that the surrender decision probably resulted from the Russian attack, and that ‘it could not in any event [have] been long in coming.’”
Let us move forward to the American war in Indochina. “Operation Rolling Thunder was the title of a gradual and sustained US 2nd Air Division (later Seventh Air Force), US Navy, and Republic of Vietnam Air Force (VNAF) aerial bombardment campaign conducted against the Democratic Republic of Vietnam (North Vietnam) from 2 March 1965 until 2 November 1968, during the Vietnam War.
The four objectives of the operation (which evolved over time) were to boost the sagging morale of the Saigon regime in the Republic of Vietnam, to persuade North Vietnam to cease its support for the communist insurgency in South Vietnam without actually taking any ground forces into communist North Vietnam, to destroy North Vietnam’s transportation system, industrial base, and air defenses, and to cease the flow of men and material into South Vietnam. [….] The operation became the most intense air/ground battle waged during the Cold War period; indeed, it was the most difficult such campaign fought by the U.S. Air Force since the aerial bombardment of Germany during World War II. Supported by communist allies, North Vietnam fielded a potent mixture of sophisticated air-to-air and ground-to-air weapons that created one of the most effective air defenses ever faced by American military aviators.”
“Operation Linebacker II was a US Seventh Air Force and US Navy Task Force 77 aerial bombing campaign, conducted against targets in the Democratic Republic of Vietnam (North Vietnam) during the final period of US involvement in the Vietnam War. The operation was conducted from 18–29 (or 17–28) December 1972, leading to several informal names such as ‘The December Raids’ and ‘The Christmas Bombings.’ It saw the largest heavy bomber strikes launched by the US Air Force since the end of World War II. Linebacker II was a resumption of the Operation Linebacker bombings conducted from May to October, with the emphasis of the new campaign shifted to attacks by B-52 Stratofortress bombers rather than tactical fighter aircraft.”
“In twelve days…the American military bludgeoned Hanoi, Haiphong, and other highly developed areas of North Vietnam with the most concentrated aerial bombardment ever used against any human population. Air Force B-52 Stratofortresses plastered densely inhabited areas with their ‘arc-light’ strikes, crater-making 2,000-pound bombs in half-mile-wide swaths. Together with the smaller F-4 Phantom and F-111 fighter-bombers, they dropped in the last five days alone 100,000 tons of explosives, the equivalent of five early atomic bombs. At the end of twelve days, American planes had dropped on North Vietnam the destructive equivalent of all the bombs dropped on Japan during the entire Second World War.”—From Wikipedia entries
In an interview with the television journalist Marvin Kalb on February 1, 1973, Kissinger defended the December 1972 bombings as essential to the effort to persuade both North and South Vietnam of the desirability and necessity of a peace agreement.
“Throughout World War II, in all sectors, the United States dropped 2 million tons of bombs; for Indochina, the total figure is 8 million tons, with an explosive power equivalent to 640 Hiroshima-size bombs. Three million tons were dropped on Laos, exceeding the total for Germany and Japan by both the U.S. and Great Britain. For nine years, an average of one planeload of bombs fell on Laos every eight minutes [from 1965 to 1973 — about one ton for every Laotian man, woman and child]. In addition, 150,000 acres of forest were destroyed through the chemical warfare known as defoliation. For South Vietnam, the figure is 19 million gallons of defoliant dropped on an area comprising 20 percent of South Vietnam—some 6 million acres. In an even briefer period, between 1969 and 1973, 513,129 tons of bombs were dropped in Cambodia, largely by B-52s, of which 257,465 tons fell in the last six months of the war (as compared to 160,771 tons on Japan from 1942-1945) [In Cambodia, between October 4, 1965 and August 15, 1973, the U.S. dropped 2,756,941 tons in 230,516 sorties on 113,716 sites]. The estimated toll of the dead, the majority civilian, is equally difficult to absorb: … 2 to 4 million in Vietnam.”— Marilyn B. Young, “Bombing Civilians from the Twentieth to the Twenty-First Centuries,” in Yuki Tanaka and Marilyn B. Young, eds., Bombing Civilians: A Twentieth-Century History (New York: The New Press, 2009).
“In Vietnam the majority of U.S. bombing was in the South of the country in the rural areas. (In the North the bombing was largely targeted on urban areas and the population had to decentralize). Much of the U.S. bombing of Indochina was integrated into the Pacification Program, primarily as part of what were called ‘search and destroy missions.’ These missions have been graphically described as ‘typically [beginning] with B-52 saturation bombing of an “objective” area ... [followed by] long range artillery fire ... aerial bombing by smaller, lower flying attack bombers which are armed with half-ton bombs, ... and huge canisters of gelatinous napalm ... Last to arrive and devastate the “objective” from the air are helicopter gunships firing rockets and M-60 machine guns….’ (Committee of Concerned Asian Scholars, 1970: 104; see also Schell, 1967). After these bombing attacks, any people left alive were either forced to move to the cities or were herded into ‘strategic hamlets,’ set up and financed by the United States, surrounded by high barbed wire fences to separate the ‘ocean’ from the ‘fish.’ Between 1965 and 1970, 5,000 hamlets, with an estimated population of four million people, were destroyed.
The use of chemicals (such as CS gas and napalm) and herbicides (such as Agents Orange and Blue) against the people, forests, and crops was also part of this overall Pacification Program of destroying the capacity for people to support the guerilla fighters, rather than primarily, as the Army generally claimed, to destroy the opposing military forces or to destroy their forest cover. According to the Committee of Concerned Asian Scholars (1970: 112), ‘The army denies that herbicides were used in populated areas. But there is ample documentary evidence to the contrary, even from government sources.’
This was the policy throughout Indochina. In Laos, from 1965 to 1973, the U.S. Air Force dropped over 2,000,000 tons of bombs. Most of the victims were civilians. In Cambodia in March 1969, the U.S. military increased to ‘intensive’ the secret bombing program: 3,630 B-52 bombing raids annihilated the country (Kiernan, 1989; Shawcross, 1987: 28).
The U.S. bombing in Indochina was the ‘heaviest aerial bombardment in history’ (Committee of Concerned Asian Scholars, 1970: 97).” — From Truda Gray and Brian Martin, “The American War in Indochina: Injustice and Outrage,” Revista de Paz y Conflictos, No. 1, 2008.
In addition to my bibliography on the American War in Indochina, see in particular:
More recent history of course provides us with yet more examples of the international crime of aggression (crimes against peace) and war crimes committed by the U.S. For instance, the U.S. helped orchestrate and took part in NATO’s bombing of the Federal Republic of Yugoslavia, in particular, “seventy-eight days of uninterrupted bombing raids on Serbia, Vojvodina, and Kosovo in 1999,” as part of its campaign of “humanitarian intervention.” Before and after this episode we have the cases of Iraq and Afghanistan, which likewise raise questions of international criminal law and justice.
All of this, while incomplete and bereft of the nasty details, is a disturbing reminder that international criminal law can often be reduced to “victors’ justice,” as the superpowers studiously and shamelessly avoid criminal responsibility and blame for aggression and war crimes they self-righteously and hypocritically condemn when believed to be (often correctly) committed by others. Those of us on the Left cannot help but also point out the war crimes (among other violations of international law, criminal and otherwise) repeatedly committed by Israel in its wars on Gaza, in which the U.S. is at least complicit.
In bringing these matters to your attention, I do not intend in any way to diminish the crime of aggression and war crimes one can accuse Russia of committing today in Ukraine (or elsewhere for that matter). It is rather an attempt to enable us to understand how and why the behavior of the U.S. around the world since WW II has contributed to an international political and legal climate in which such examples as those outlined above, as well as those set by other powerful nation-states (Russia, China, the UK …), have led to something like a contagion effect on other countries, thereby forming a moral vacuum and corresponding ethos of illegality and political chaos or instability in the international community (such as it is), which at the very least increases the risk of more and wider wars if not the use of nuclear weapons (‘do as I say, not as I do,’ can no longer serve as the regnant norm). It leads to derision, cynicism, and even dismissal of the necessity and value of international criminal law and justice. This, and thus not Russia alone, has brought us to the horrific situation that prevails in Ukraine. Manichaean morality in the world of nation-states is entirely self-defeating and intrinsically dangerous in whatever form it assumes among the powers-that-be, especially the U.S. Again, please do not misunderstand; we need to do all we can to bring the war in Ukraine to an end. At the very least, we need to make sure that any “success” on that front that Russia may claim will not set the stage for yet more Russian imperialist aggression. And equally important, we need assurance from our political and military leaders (perhaps preceded by a confession of sins) that henceforth they will hold themselves to the very same moral and legal norms and standards they assign to others, indeed, that they will act domestically and abroad in a manner worthy of emulation. Perhaps this is a utopian wish, although I prefer to believe it is a realistic hope, for if I thought otherwise, I would conclude that we are all, at least figuratively, damned.
Relevant Bibliographies
Posted at 03:59 PM in Patrick S. O'Donnell | Permalink | Comments (0)
Minoan “Fresco of the Dolphins,” from the palace at Knossos on the island of Crete.
“It is important for all of us to try hard to understand what scientists have been discovering. Animals have long been seen as mere property, as ‘brute beasts.’ Now a revolution in knowledge is revealing the enormous richness and cognitive complexity of animal lives, which prominently include intricate social groups, emotional responses, and even cultural learning. We share this fragile planet with other sentient animals, whose efforts to live and flourish are thwarted in countless ways by human negligence and obtuseness. This gives us a collective responsibility to do something to make our ubiquitous domination more benign, less brutal—perhaps even more just.
But to think clearly about our responsibility, we need to understand these animals as accurately as we can: what they are striving for, what capacities and responses they have as they try to flourish. Knowledge will help us to think better about the ethical questions before us and, especially, to develop a good theoretical orientation toward animal lives, which can direct law and policy well, rather than, as in the past, crudely and obtusely.” — Martha C. Nussbaum, “What We Owe Our Fellow Animals,” New York Review of Books, March 10, 2022: 34-36
* * *
Do animals experience pain? Some philosophers have argued that they in fact do not!
Sometimes philosophers lack what we might call common sense (which, admittedly, is today increasingly uncommon), or they are too clever for their own good, or they simply reason poorly. Being a non-philosopher, I confess to occasionally enjoying pointing out the folly or blind spots or inexcusable ignorance among (at least some) professional philosophers. Today I will provide you with one example along these lines, involving two different philosophers who happen to share the same first name: Peter, i.e., Peter Harrison and Peter Carruthers. I was familiar with the work of Carruthers but just learned of a similar conclusion reached by Harrison. I came across their arguments (which strike me as bizarre to the point of implausibility) in the field of “animal ethics” in Gary Steiner’s “marvelously clear and accessible book” (the praise is Gary L. Francione’s—two more shared first names), Animals and the Moral Community: Mental Life, Moral Status, and Kinship (Columbia University Press, 2008). Both philosophers have argued that nonhuman animals cannot experience pain, although their specific reasons and thus arguments differ, with Harrison relying on ideas derived from Descartes (hence ‘only beings with rational minds [involving beliefs and knowledge] can experience pain’). I will concentrate on the short argument made by Carruthers.
For our purposes, I more or less endorse the views of Steiner, which I will enumerate before proceeding to Carruthers’ argument.
(i) We should not draw “a sharp distinction between human beings and non-human animals. Evolutionary continuity and physiological similarity make any such distinction naïve at best.”
(ii) We should also avoid two extremes: (a) The first is the view that animals are capable of employing conceptual abstraction and [what philosophers term] propositional attitudes, which translates into an ability to form “complex intentions.” (b) The other “extreme” view is that animal cognition is simply about “information processing.” These extremes have canalized into two very different conclusions in the form of “animal cognition as a strict either-or”: either animals possess the full apparatus of intentionality or they lack all states of subjective awareness.
I agree with Steiner that what we need is a “theory of animal minds that dispenses with appeals to formal intentionality while seeking to acknowledge the richness and sophistication of the inner lives of animals.” The first clause amounts to denying our fellow non-human creatures “the conceptual and predicative abilities that make possible complex thought, self-reflective awareness, and moral agency” (cf. the arguments made by Mark Rowlands for animals to be conceived as ‘moral subjects’ rather than moral agents). Steiner argues (relying in part on arguments made by Ruth Millikan) that animals have perceptual representations that are related to their goals through means of complex associations, which for Millikan means that animals “are confined in their perceptual environment in a way that human beings are not.” Unlike Millikan, however, Steiner holds that the representations he attributes to animals cannot take predicative form, being entirely perceptual rather than intentional in nature (we will not attempt here to explain precisely what that means and entails).
Back to Carruthers: While Professor Carruthers argues that “many animals—at least mammals,” have beliefs and desires and perceptual or sensual consciousness or awareness, this sort of immediate awareness is different from having “conscious mental states and experiences,” which are “two different matters.” A concise summary is provided by Steiner:
“Although animals can be conscious in the sense of being ‘aware of the world around them and of the states of their own bodies,’ they do not have conscious experiences, inasmuch as animals are incapable of being conscious that they are in the states that they are in. How animals can have beliefs and desires without being able to be conscious that … Carruthers never explains. He simply advances the view that animals have immediate states of awareness, but that these states do not count as conscious experience since animals cannot think about them. This leads Carruthers to the same conclusion as Harrison, namely, that animals cannot experience pain. [!!!] For pain to be conscious, it must be available to conscious thinking [what is usually termed ‘higher-order’ or ‘second-order’ thinking or reflection or ‘meta-cognition’]. But ‘if animals are incapable of thinking about their own acts of thinking, their pains must all be non-conscious ones.’ And because ‘there is nothing that it is like to be the subject of non-conscious pain,’ animal pain does not merit our sympathy.”
Recommended Reading
See too these bibliographies:
Posted at 05:17 AM in Patrick S. O'Donnell | Permalink | Comments (0)
Fortunately, there are all manner of passionate symbolic and concrete expressions of global solidarity with the people of Ukraine in their fight against Putin’s (Russia’s) war of aggression on their sovereign nation-state. Today I would like to share some news sources and international law and other blogs I’ve found helpful by way of making some legal and political sense (both encouraging and discouraging) of what is happening, while putting aside for the moment all the historical events or variables that one might argue are among the long- or medium-term or precipitating causes and larger contextual backdrop that likely played a formative role in Putin’s decision to invade Ukraine (given Putin’s sociopathic authoritarian dispositions and behavior, we should not, in any case, presume these will adequately explain his motivations).
Perhaps the foremost international law blog is now EJIL: Talk! (blog of the European Journal of International Law). One of the blog’s editors, Marko Milanovic, is especially helpful. See, for example, his posts, “Recognition,” and “What is Russia’s Legal Justification for Using Force against Ukraine?” Since these were first posted in the third week of February, there have been numerous contributions by first-rate scholars on a variety of legal and political topics concerning Russia’s invasion of Ukraine. In addition to the reporting and op-eds at the Los Angeles Times, The New York Times, The Washington Post, The Guardian, and Al Jazeera online, other helpful sites include Opinio Juris, Lawfare, Just Security, and Eurasianet. I occasionally check in at both CNN and MSNBC for the latest news. I’ve not kept track of all the insightful pieces I’ve read from public intellectuals and experts on “international security” over the last month because I’ve been preoccupied with my own research projects, but if I can carve out the time, I’ll try to post some of these in the future. I leave you with two pieces from Slavoj Žižek, who is at his best when he is not trying to impress us with his Lacanian and philosophical bona fides: “What Does Defending Europe Mean?,” and something he penned back in 2014, “Why both the left and right have got it wrong on Ukraine.”
Addenda: A FB friend shared this article by Žižek as well: “What Will Grow Out of a Pocket Full of Sunflower Seeds?,” which just appeared in The Philosophical Salon (part of an LARB project). And while (for better and worse) I rarely listen to podcasts, this one is very good and thus highly recommended: EJIL: The Podcast! Episode 14—“From Russia With War.”
Relevant Bibliographies
Posted at 12:07 PM in Patrick S. O'Donnell | Permalink | Comments (0)
I have become quite intrigued by the portrayal of animals in children’s literature, but especially the use of animal stories to convey lessons in morality and ethics or the learning of specific virtues or moral emotions (the latter often include a cognitive component or rational element). I don’t think that this is the only or even primary way children are socialized into morality and later ethical life; indeed, I suspect imitation and learning by example (in families and communities) are likely, at least in the beginning, to be decisive. In that case such stories may play a unique role in making the child at once more reflective and imaginative, awakening, for instance, what we can call moral emotions like guilt and shame, sympathy, compassion, and empathy, what Mark Rowlands terms “moral emotions of care and concern,” as well as shaping or constraining dispositionally “dangerous” or fraught emotions like anger, rage, envy, and spite. Such stories may simply yet no less importantly reinforce or fill out other means of moral awakening, imagination, and socialization. In a moral psychological and perhaps epistemic sense, they can be the seeds of personal identity in conjunction with moments of incipient self-examination. These stories may exist as one side of a model or form of mutual literary reinforcement that includes the learning of, say, popular proverbs, adages, and aphorisms (maxims, however, such as those written by the French moralists, are more suitable to those well past the ‘age of reason’ in a developmental sense), some of which may be found in the tales themselves.
In particular, and speaking for myself, Aesop’s Fables, the Jātaka tales in Buddhism (some of these tales pre-date Buddhism in Indic culture), and the Indic Pañcatantra stand apart. The Jātaka tales involve previous (re-)birth stories of the Buddha, these births being of both animal and eventually human form. I was provoked to consider such stories afresh, oddly enough, not only because I cannot recall learning them (whether by hearing or reading) as a child (indeed, I learned of them only later as a parent when, along with my spouse, reading them to our children!), but as a result of my recent study of philosophical arguments (and scientific evidence) explaining how animals behave or act morally, or at least express moral attitudes or emotions (e.g., sympathy, compassion, care, concern or solicitude, and grief). Some people would dismiss such philosophical arguments (even if they enjoy and appreciate the morals or ‘truths’ of the aforementioned bodies of literature) as mired in illicit personification or anthropomorphism, and yet increasing scientific evidence testifies otherwise, in addition to having been endorsed by philosophers who’ve found deflationary behaviorist and decidedly non-moral explanations unpersuasive even if still (hypothetically) plausible. One reason arguments here cannot be judged conclusive in the sense of eliminating once-and-for-all counter-arguments is that the hypotheses and theories of the principal parties rest on different presuppositions, assumptions, and presumptions; thus they may exhibit (more or less) internal consistency and coherence which is liable to appear weaker in the face of counter-arguments, that is, from the outside-looking-in as it were.
These classical stories (fables and tales), not surprisingly, are frequently described as involving “metaphors of anthropomorphized animals with human virtues and vices.” That may, sometimes, and strictly speaking, be true, yet the qualification is necessary because it now appears that different species of animals display, as “moral subjects” (distinguished from ‘moral patients’ on the one hand, and ‘moral agents’ on the other) what Rowlands calls “moral emotions” in his brilliant and pathbreaking book, Can Animals Be Moral? (Oxford University Press, 2012).
The three summaries below are edited versions of the respective Wikipedia entries. The last entry on the Pañcatantra contains a helpful discussion of the possible if not probable relations and influences that exist between these three bodies of “classical” fables and tales.
“The Jātaka tales are a voluminous body of literature native to India concerning the previous births of Gautama Buddha in both human and animal form. The future Buddha may appear as a king, an outcast, a god, an elephant—but, in whatever form, he exhibits some virtue that the tale thereby inculcates. Often, Jātaka tales include an extensive cast of characters who interact and get into various kinds of trouble—whereupon the Buddha character intervenes to resolve all the problems and bring about a happy ending.
In Theravada Buddhism, the Jātakas are a textual division of the Pāli Canon, included in the Khuddaka Nikaya of the Sutta Pitaka. The term Jātaka may also refer to a traditional commentary on this book. The tales are dated between 300 BC and 400 AD. Mahāsāṃghika Caitika sects from the Āndhra region took the Jātakas as canonical literature and are known to have rejected some of the Theravāda Jātakas which dated past the time of King Ashoka. The Caitikas claimed that their own Jātakas represented the original collection before the Buddhist tradition split into various lineages.
According to A. K. Warder, the Jātakas are the precursors to the various legendary biographies of the Buddha, which were composed at later dates. Although many Jātakas were written from an early period, which describe previous lives of the Buddha, very little biographical material about Gautama’s own life has been recorded.
The Jātaka-Mālā of Arya Śura in Sanskrit gives 34 Jātaka stories. At the Ajanta Caves, Jātaka scenes are inscribed with quotes from Arya Shura, with script datable to the sixth century. It had already been translated into Chinese in 434 CE. Borobudur contains depictions of all 34 Jatakas from Jataka Mala.”
“Aesop’s Fables, or the Aesopica, is a collection of fables credited to Aesop, a slave and storyteller believed to have lived in ancient Greece between 620 and 564 BCE. Of diverse origins, the stories associated with his name have descended to modern times through a number of sources and continue to be reinterpreted in different verbal registers and in popular as well as artistic media.
The fables originally belonged to the oral tradition and were not collected for some three centuries after Aesop’s death. By that time, a variety of other stories, jokes and proverbs were being ascribed to him, although some of that material was from sources earlier than him or came from beyond the Greek cultural sphere. The process of inclusion has continued until the present, with some of the fables unrecorded before the Late Middle Ages and others arriving from outside Europe. The process is continuous and new stories are still being added to the Aesop corpus, even when they are demonstrably more recent work and sometimes from known authors.
Manuscripts in Latin and Greek were important avenues of transmission, although poetical treatments in European vernaculars eventually formed another. On the arrival of printing, collections of Aesop’s fables were among the earliest books in a variety of languages. Through the means of later collections, and translations or adaptations of them, Aesop's reputation as a fabulist was transmitted throughout the world.
Initially the fables were addressed to adults and covered religious, social and political themes. They were also put to use as ethical guides and from the Renaissance onwards were particularly used for the education of children. Their ethical dimension was reinforced in the adult world through depiction in sculpture, painting and other illustrative means, as well as adaptation to drama and song. In addition, there have been reinterpretations of the meaning of fables and changes in emphasis over time. Apollonius of Tyana, a 1st century CE philosopher, is recorded as having said about Aesop:
‘like those who dine well off the plainest dishes, he made use of humble incidents to teach great truths, and after serving up a story he adds to it the advice to do a thing or not to do it. Then, too, he was really more attached to truth than the poets are; for the latter do violence to their own stories in order to make them probable; but he by announcing a story which everyone knows not to be true, told the truth by the very fact that he did not claim to be relating real events.’
Earlier still, the Greek historian Herodotus mentioned in passing that ‘Aesop the fable writer’ was a slave who lived in Ancient Greece during the 5th century BCE. Among references in other writers, Aristophanes, in his comedy The Wasps, represented the protagonist Philocleon as having learnt the ‘absurdities’ of Aesop from conversation at banquets; Plato wrote in Phaedo that Socrates whiled away his time in prison turning some of Aesop’s fables ‘which he knew’ into verses. Nonetheless, for two main reasons, because numerous morals within Aesop’s attributed fables appear to contradict each other, and because ancient accounts of Aesop’s life contradict each other, the modern view is that Aesop was not the originator of all those fables attributed to him. Instead, any fable tended to be ascribed to the name of Aesop if there was no known alternative literary source.
In Classical times there were various theorists who tried to differentiate these fables from other kinds of narration. They had to be short and unaffected; in addition, they are fictitious, useful to life and true to nature. In them could be found talking animals and plants, although humans interacting only with humans figure in a few. Typically they might begin with a contextual introduction, followed by the story, often with the moral underlined at the end. Setting the context was often necessary as a guide to the story’s interpretation, as in the case of the political meaning of The Frogs Who Desired a King and The Frogs and the Sun.
Sometimes the titles given later to the fables have become proverbial, as in the case of killing the Goose that Laid the Golden Eggs or the Town Mouse and the Country Mouse. In fact some fables, such as The Young Man and the Swallow, appear to have been invented as illustrations of already existing proverbs. One theorist, indeed, went so far as to define fables as extended proverbs. In this they have an aetiological function, the explaining of origins such as, in another context, why the ant is a mean, thieving creature or how the tortoise got its shell. Other fables, also verging on this function, are outright jokes, as in the case of The Old Woman and the Doctor, aimed at greedy practitioners of medicine.
The apparent contradictions between fables already mentioned and alternative versions of much the same fable, as in the case of The Woodcutter and the Trees, are best explained by the ascription to Aesop of all examples of the genre. Some are demonstrably of West Asian origin, others have analogues further to the East. Modern scholarship reveals fables and proverbs of Aesopic form existing in both ancient Sumer and Akkad, as early as the third millennium BCE. Aesop’s fables and stories from Indian traditions, for instance the Buddhist Jātaka tales and the Hindu Pañcatantra, share about a dozen tales in common, although often widely differing in detail. There is some debate over whether the Greeks learned these fables from Indian storytellers or the other way around, or if the influences were mutual.
Loeb editor Ben E. Perry took the extreme position in his book Babrius and Phaedrus (1965) that ‘in the entire Greek tradition there is not, so far as I can see, a single fable that can be said to come either directly or indirectly from an Indian source; but many fables or fable-motifs that first appear in Greek or Near Eastern literature are found later in the Panchatantra and other Indian story-books, including the Buddhist Jatakas.’ Although Aesop and the Buddha were near contemporaries, the stories of neither were recorded in writing until some centuries after their death. Few disinterested scholars would now be prepared to make so absolute a stand as Perry about their origin in view of the conflicting and still emerging evidence.
When and how the fables arrived in and travelled from ancient Greece remains uncertain. Some cannot be dated any earlier than Babrius and Phaedrus, several centuries after Aesop, and yet others even later. The earliest mentioned collection was by Demetrius of Phalerum, an Athenian orator and statesman of the 4th century BCE, who compiled the fables into a set of ten books for the use of orators. A follower of Aristotle, he simply catalogued all the fables that earlier Greek writers had used in isolation as exempla, putting them into prose. At least it was evidence of what was attributed to Aesop by others; but this may have included any ascription to him from the oral tradition in the way of animal fables, fictitious anecdotes, etiological or satirical myths, possibly even any proverb or joke, that these writers transmitted. It is more a proof of the power of Aesop’s name to attract such stories to it than evidence of his actual authorship. In any case, although the work of Demetrius was mentioned frequently for the next twelve centuries, and was considered the official Aesop, no copy now survives. Present day collections evolved from the later Greek version of Babrius, of which there now exists an incomplete manuscript of some 160 fables in choliambic verse. Current opinion is that he lived in the 1st century CE. The version of 55 fables in choliambic tetrameters by the 9th century Ignatius the Deacon is also worth mentioning for its early inclusion of tales from Oriental sources.
Further light is thrown on the entry of Oriental stories into the Aesopic canon by their appearance in Jewish sources such as the Talmud and in Midrashic literature. [….] Where similar fables exist in Greece, India, and in the Talmud, the Talmudic form approaches more nearly the Indian. Thus, the fable ‘The Wolf and the Crane’ is told in India of a lion and another bird. When Joshua ben Hananiah told that fable to the Jews, to prevent their rebelling against Rome and once more putting their heads into the lion’s jaws (Gen. R. lxiv.), he shows familiarity with some form derived from India.
The first extensive translation of Aesop into Latin iambic trimeters was performed by Phaedrus, a freedman of Augustus in the 1st century CE, although at least one fable had already been translated by the poet Ennius two centuries before, and others are referred to in the work of Horace. The rhetorician Aphthonius of Antioch wrote a technical treatise on, and converted into Latin prose, some forty of these fables in 315. It is notable as illustrating contemporary and later usage of fables in rhetorical practice. Teachers of philosophy and rhetoric often set the fables of Aesop as an exercise for their scholars, inviting them not only to discuss the moral of the tale, but also to practise style and the rules of grammar by making new versions of their own. A little later the poet Ausonius handed down some of these fables in verse, which the writer Julianus Titianus translated into prose, and in the early 5th century Avianus put 42 of these fables into Latin elegiacs.
The largest, oldest known and most influential of the prose versions of Phaedrus bears the name of an otherwise unknown fabulist named Romulus. It contains 83 fables, dates from the 10th century and seems to have been based on an earlier prose version which, under the name of ‘Aesop’ and addressed to one Rufus, may have been written in the Carolingian period or even earlier. The collection became the source from which, during the second half of the Middle Ages, almost all the collections of Latin fables in prose and verse were wholly or partially drawn. A version of the first three books of Romulus in elegiac verse, possibly made around the 12th century, was one of the most highly influential texts in medieval Europe. Referred to variously (among other titles) as the verse Romulus or elegiac Romulus, and ascribed to Gualterus Anglicus, it was a common Latin teaching text and was popular well into the Renaissance. Another version of Romulus in Latin elegiacs was made by Alexander Neckam, born at St Albans in 1157.
Interpretive ‘translations’ of the elegiac Romulus were very common in Europe in the Middle Ages. Among the earliest was one in the 11th century by Ademar of Chabannes, which includes some new material. This was followed by a prose collection of parables by the Cistercian preacher Odo of Cheriton around 1200 where the fables (many of which are not Aesopic) are given a strong medieval and clerical tinge. This interpretive tendency, and the inclusion of yet more non-Aesopic material, was to grow as versions in the various European vernaculars began to appear in the following centuries.
With the revival of literary Latin during the Renaissance, authors began compiling collections of fables in which those traditionally by Aesop and those from other sources appeared side by side. One of the earliest was by Lorenzo Bevilaqua, also known as Laurentius Abstemius, who wrote 197 fables, the first hundred of which were published as Hecatomythium in 1495. Little by Aesop was included. At the most, some traditional fables are adapted and reinterpreted: The Lion and the Mouse is continued and given a new ending (fable 52); The Oak and the Reed becomes ‘The Elm and the Willow’ (53); The Ant and the Grasshopper is adapted as ‘The Gnat and the Bee’ (94) with the difference that the gnat offers to teach music to the bee’s children. There are also Mediaeval tales such as The Mice in Council (195) and stories created to support popular proverbs such as ‘Still Waters Run Deep’ (5) and ‘A woman, an ass and a walnut tree’ (65), where the latter refers back to Aesop’s fable of The Walnut Tree. Most of the fables in Hecatomythium were later translated in the second half of Roger L’Estrange’s Fables of Aesop and other eminent mythologists (1692); some also appeared among the 102 in H. Clarke’s Latin reader, Select fables of Aesop: with an English translation (1787), of which there were both English and American editions. [….]
Until the 18th century the fables were largely put to adult use by teachers, preachers, speech-makers and moralists. It was the philosopher John Locke who first seems to have advocated targeting children as a special audience in Some Thoughts Concerning Education (1693). Aesop’s fables, in his opinion, are
‘apt to delight and entertain a child . . . yet afford useful reflection to a grown man. And if his memory retain them all his life after, he will not repent to find them there, amongst his manly thoughts and serious business. If his Aesop has pictures in it, it will entertain him much better, and encourage him to read when it carries the increase of knowledge with it. For such visible objects children hear talked of in vain, and without any satisfaction, whilst they have no ideas of them; those ideas being not to be had from sounds, but from the things themselves, or their pictures.’
That young people are a special target for the fables was not a particularly new idea and a number of ingenious schemes for catering to that audience had already been put into practice in Europe. The Centum Fabulae of Gabriele Faerno was commissioned by Pope Pius IV in the 16th century ‘so that children might learn, at the same time and from the same book, both moral and linguistic purity.’ When King Louis XIV of France wanted to instruct his six-year-old son, he incorporated the series of hydraulic statues representing 38 chosen fables in the labyrinth of Versailles in the 1670s. In this he had been advised by Charles Perrault, who was later to translate Faerno’s widely published Latin poems into French verse and so bring them to a wider audience. Then in the 1730s appeared the eight volumes of Nouvelles Poésies Spirituelles et Morales sur les plus beaux airs, the first six of which incorporated a section of fables specifically aimed at children. In this the fables of La Fontaine were rewritten to fit popular airs of the day and arranged for simple performance. The preface to this work comments that ‘we consider ourselves happy if, in giving them an attraction to useful lessons which are suited to their age, we have given them an aversion to the profane songs which are often put into their mouths and which only serve to corrupt their innocence.’ The work was popular and reprinted into the following century. [….]
The Pañcatantra (Sanskrit: पञ्चतन्त्र, ‘Five Treatises’) is an ancient Indian collection of interrelated animal fables in Sanskrit verse and prose, arranged within a frame story. The surviving work is dated to about 200 BCE, but the fables are likely much more ancient. The text’s author is unknown, but it has been attributed to Vishnu Sharma in some recensions and Vasubhaga in others, both of which may be fictitious pen names. It is likely a Hindu text, and based on older oral traditions with ‘animal fables that are as old as we are able to imagine.’
It is ‘certainly the most frequently translated literary product of India,’ and these stories are among the most widely known in the world. It goes by many names in many cultures. There is a version of Pañcatantra in nearly every major language of India, and in addition there are 200 versions of the text in more than 50 languages around the world. One version reached Europe in the 11th century. To quote Edgerton (1924):
‘… before 1600 it existed in Greek, Latin, Spanish, Italian, German, English, Old Slavonic, Czech, and perhaps other Slavonic languages. Its range has extended from Java to Iceland.... [In India,] it has been worked over and over again, expanded, abstracted, turned into verse, retold in prose, translated into medieval and modern vernaculars, and retranslated into Sanskrit. And most of the stories contained in it have “gone down” into the folklore of the story-loving Hindus, whence they reappear in the collections of oral tales gathered by modern students of folk-stories.’
The earliest known translation into a non-Indian language is in Middle Persian (Pahlavi, 550 CE) by Burzoe. This became the basis for a Syriac translation as Kalilag and Damnag and a translation into Arabic in 750 CE by Persian scholar Abdullah Ibn al-Muqaffa as Kalīlah wa Dimnah. A New Persian version by Rudaki, from the 3rd century Hijri, became known as Kalīleh o Demneh. Rendered in prose by Abu’l-Ma’ali Nasrallah Monshi in 1143 CE, this was the basis of Kashefi’s 15th century Anvār-i Suhaylī (The Lights of Canopus), which in turn was translated into Turkish as the Humayun-namah. The book is also known as The Fables of Bidpai (or Pilpai in various European languages, Vidyapati in Sanskrit) or The Morall Philosophie of Doni (English, 1570). Most European versions of the text are derivative works of the 12th century Hebrew version of Pañcatantra by Rabbi Joel. In Germany, its translation in 1480 by Anton von Pforr has been widely read. Several versions of the text are also found in Indonesia, where it is titled as Tantri Kamandaka, Tantravakya or Candapingala and consists of 360 fables. In Laos, a version is called Nandaka-prakarana, while in Thailand it has been referred to as Nang Tantrai.
The prelude section of the Pañcatantra identifies an octogenarian Brahmin named Vishnusharma (Viṣṇuśarman) as its author. He is stated to be teaching the principles of good government to three princes of Amarasakti. It is unclear, states Patrick Olivelle, a professor of Sanskrit and Indian religions, if Vishnusharma was a real person or himself a literary invention. Some South Indian recensions of the text, as well as Southeast Asian versions of Pañcatantra attribute the text to Vasubhaga, states Olivelle. Based on the content and mention of the same name in other texts dated to ancient and medieval era centuries, most scholars agree that Vishnusharma is a fictitious name. Olivelle and other scholars state that regardless of who the author was, it is likely ‘the author was a Hindu, and not a Buddhist, nor Jain,’ but it is unlikely that the author was a devotee of the Hindu god Vishnu because the text neither expresses any sentiments against other Hindu deities such as Shiva, Indra and others, nor does it avoid invoking them with reverence.
Various locations where the text was composed have been proposed but this has been controversial. Some of the proposed locations include Kashmir, Southwestern or South India. The text’s original language was likely Sanskrit. Though the text is now known as Pañcatantra, the title found in old manuscript versions varies regionally, and includes names such as Tantrakhyayika, Panchakhyanaka, Panchakhyana and Tantropakhyana. The suffix akhyayika and akhyanaka mean ‘little story’ or ‘little story book’ in Sanskrit.
The text was translated into Pahlavi in 550 CE, which forms the latest limit of the text’s existence. The earliest limit is uncertain. It quotes identical verses from Arthasastra, which is broadly accepted to have been completed by the early centuries of the common era. According to Olivelle, ‘the current scholarly consensus places the Pañcatantra around 300 CE, although we should remind ourselves that this is only an educated guess.’ The text quotes from older genre of Indian literature, and legends with anthropomorphic animals are found in more ancient texts dated to the early centuries of the 1st millennium BCE such as the chapter 4.1 of the Chandogya Upanishad. According to Gillian Adams, Pañcatantra may be a product of the Vedic period, but its age cannot be ascertained with confidence because ‘the original Sanskrit version has been lost.’ [….]
The Pañcatantra is a series of inter-woven fables, many of which deploy metaphors of anthropomorphized animals with human virtues and vices. Its narrative illustrates, for the benefit of three ignorant princes, the central Hindu principles of nīti. While nīti is hard to translate, it roughly means prudent worldly conduct, or ‘the wise conduct of life.’
Apart from a short introduction, it consists of five parts. Each part contains a main story, called the frame story, which in turn contains several embedded stories, as one character narrates a story to another. Often these stories contain further embedded stories. The stories operate like a succession of Russian dolls, one narrative opening within another, sometimes three or four deep. Besides the stories, the characters also quote various epigrammatic verses to make their point. The five books have their own subtitles. [….]
The fables of the Pañcatantra are found in numerous world languages. It is also considered a partial source of European secondary works, such as folk-tale motifs found in Boccaccio, La Fontaine and the works of the Brothers Grimm. For a time this led to the hypothesis that popular animal fables worldwide had their origins in India and the Middle East. According to Max Müller, ‘Sanskrit literature is very rich in fables and stories; no other literature can vie with it in that respect; nay, it is extremely likely that fables, in particular animal fables, had their principal source in India.’
This mono-causal hypothesis has now been generally discarded in favor of the polygenetic hypothesis, which holds that fable motifs had independent origins in many ancient human cultures, some sharing common roots and some shaped by the mutual exchange of fables. The shared fables implied morals that appealed to communities separated by great distances, and these fables were therefore retained and transmitted across generations with local variations. However, many post-medieval authors explicitly credit their inspiration to texts attributed to ‘Bidpai’ and ‘Pilpay, the Indian sage,’ which are known to be based on the Pañcatantra.
According to Niklas Bengtsson, even though the claim that India was the exclusive original source of fables is no longer taken seriously, the ancient classic Pañcatantra, ‘which new folklore research continues to illuminate, was certainly the first work ever written down for children, and this in itself means that the Indian influence has been enormous [on world literature], not only on the genres of fables and fairy tales, but on those genres as taken up in children’s literature.’ According to Adams and Bottigheimer, the fables of the Pañcatantra are known in at least 38 languages around the world in 112 versions by Jacobs’s old estimate, and its relationship with Mesopotamian and Greek fables is hotly debated, in part because the original manuscripts of all three ancient texts have not survived. Olivelle states that there are 200 versions of the text in more than 50 languages around the world, in addition to a version in nearly every major language of India.
Scholars have noted the strong similarity between a few of the stories in the Pañcatantra and Aesop’s Fables. Examples are The Ass in the Panther’s Skin and The Ass without Heart and Ears. The Broken Pot is similar to Aesop’s The Milkmaid and Her Pail, and The Gold-Giving Snake is similar to Aesop’s The Man and the Serpent and to Le Paysan et Dame serpent by Marie de France (Fables). Other well-known stories include The Tortoise and the Geese and The Tiger, the Brahmin and the Jackal. Similar animal fables are found in most cultures of the world, although some folklorists view India as the prime source. [….] The French fabulist Jean de La Fontaine acknowledged his indebtedness to the work in the introduction to his Second Fables: ‘This is a second book of fables that I present to the public.... I have to acknowledge that the greatest part is inspired from Pilpay, an Indian Sage.’ The Pañcatantra is also the origin of several stories in the Arabian Nights and Sindbad, and of many Western nursery rhymes and ballads.
In the Indian tradition, the Pañcatantra is a nītiśāstra. Nīti can be roughly translated as ‘the wise conduct of life,’ and a śāstra is a technical or scientific treatise; thus it is considered a treatise on political science and human conduct. Its literary sources are accordingly ‘the expert tradition of political science and the folk and literary traditions of storytelling.’ It draws from the Dharma and Artha śāstras, quoting them extensively. It has also been explained that nīti ‘represents an admirable attempt to answer the insistent question how to win the utmost possible joy from life in the world of men’ and that nīti is ‘the harmonious development of the powers of man, a life in which security, prosperity, resolute action, friendship, and good learning are so combined as to produce joy.’
The Pañcatantra shares many stories in common with the Buddhist Jātaka tales, purportedly told by the historical Buddha before his death around 400 BCE. As the scholar Patrick Olivelle writes, ‘It is clear that the Buddhists did not invent the stories. [....] It is quite uncertain whether the author of [the Pañcatantra] borrowed his stories from the Jātakas or the Mahābhārata, or whether he was tapping into a common treasury of tales, both oral and literary, of ancient India.’ Many scholars believe the tales were based on earlier oral folk traditions that were eventually written down, although there is no conclusive evidence. In the early 20th century, W. Norman Brown found that many folk tales in India appeared to be borrowed from literary sources rather than the reverse. [….]
According to Olivelle, ‘… the current scholarly debate regarding the intent and purpose of the Pañcatantra — whether it supports unscrupulous Machiavellian politics or demands ethical conduct from those holding high office — underscores the rich ambiguity of the text.’ Konrad Meisig states that the Pañcatantra has been incorrectly represented by some as ‘an entertaining textbook for the education of princes in the Machiavellian rules of Arthasastra’; it is instead a book for the ‘Little Man,’ meant to cultivate ‘Niti’ (social ethics, prudent behavior, shrewdness) in the pursuit of Artha, and a work of social satire. According to Joseph Jacobs, ‘... if one thinks of it, the very raison d’être of the Fable is to imply its moral without mentioning it.’
The Pañcatantra, states Patrick Olivelle, is a wonderful collection of delightful stories with pithy proverbs and ageless, practical wisdom; part of its appeal and success is that it is a complex book that ‘does not reduce the complexities of human life, government policy, political strategies, and ethical dilemmas into simple solutions; it can and does speak to different readers at different levels.’ [….]
The Sanskrit version of the Pañcatantra gives names to its animal characters, but the names are creative, carrying double meanings. A name connotes a character observable in nature while also mapping onto a human personality that a reader can readily identify. For example, the deer characters are presented as a metaphor for the charming, innocent, peaceful and tranquil personality who becomes a target for those seeking prey to exploit, while the crocodile symbolizes dangerous intent hidden beneath a welcoming ambiance (the waters of a lotus-laden pond). Dozens of types of wildlife found in India are named in this way, and together they constitute an array of symbolic characters in the Pañcatantra. The animals’ names thus evoke layered meanings that resonate with the reader, and the same story can be read at different levels.
The work has gone through many different versions and translations from the sixth century to the present day. The original Indian version was first translated into a foreign language (the Pahlavi rendering by Borzūya noted above) and then into Arabic in 750 CE. The Arabic version was translated into several languages, including Syriac, Greek, Persian, Hebrew and Spanish, and thus became the source of versions in European languages, until Charles Wilkins’s English translation of the Sanskrit Hitopadesha in 1787.
The Pañcatantra approximated its current literary form between the 4th and 6th centuries CE, though it may have been composed in some form as early as around 200 BCE. No Sanskrit manuscripts of the text from before 1000 CE have survived. Buddhist monks on pilgrimage to India carried the influential Sanskrit text (probably in both oral and written forms) north to Tibet and China and east to Southeast Asia. This led to versions across Asia, including Tibetan, Chinese, Mongolian, Javanese and Lao derivatives. [….]
It was the Pañcatantra that served as the basis for the studies of Theodor Benfey, the pioneer in the field of comparative literature. His efforts began to clear up the confusion surrounding the history of the Pañcatantra, culminating in the works of Hertel and Edgerton. Hertel discovered several recensions in India, in particular the oldest available Sanskrit recension, the Tantrakhyayika, in Kashmir, as well as the so-called North Western Family Sanskrit text by the Jain monk Purnabhadra (1199 CE), which blends and rearranges at least three earlier versions. Edgerton undertook a minute study of all the texts that seemed ‘to provide useful evidence on the lost Sanskrit text to which, it must be assumed, they all go back,’ and believed he had reconstructed the original Sanskrit Pañcatantra; this version is known as the Southern Family text.
Among modern translations, Arthur W. Ryder’s (1925), which renders prose as prose and verse as rhyming verse, remains popular. In the 1990s two English versions of the Pañcatantra were published: Chandra Rajan’s translation (like Ryder’s, based on Purnabhadra’s recension) by Penguin (1993), and Patrick Olivelle’s translation (based on Edgerton’s reconstruction of the ur-text) by Oxford University Press (1997). Olivelle’s translation was republished in 2006 by the Clay Sanskrit Library. [….]
The novelist Doris Lessing notes in her introduction to Ramsay Wood’s 1980 ‘retelling’ of the first two of the five Pañcatantra books that
‘… it is safe to say that most people in the West these days will not have heard of it, while they will certainly at the very least have heard of the Upanishads and the Vedas. Until comparatively recently, it was the other way around. Anyone with any claim to a literary education knew that the Fables of Bidpai or the Tales of Kalila and Dimna — these being the most commonly used titles with us — was a great Eastern classic. There were at least twenty English translations in the hundred years before 1888. Pondering on these facts leads to reflection on the fate of books, as chancy and unpredictable as that of people or nations.’
Posted at 09:07 AM in Patrick S. O'Donnell | Permalink | Comments (0)