Developmental Stages

Developmental Stages: Piaget

Sensorimotor Stage
0-2 years—Child begins to make use of imitation, memory, and thought.
Begins to recognize that objects do not cease to exist when they are hidden.
Moves from reflex action to goal-oriented action.
Preoperational Stage
2-7 years—The preoperational stage occurs between approximately ages two and seven. Language development is one of the hallmarks of this period. Piaget noted that children in this stage do not yet understand concrete logic, cannot mentally manipulate information, and are unable to take the point of view of other people, a limitation he termed egocentrism.
During the preoperational stage, children also become increasingly adept at using symbols, as evidenced by the increase in playing and pretending. For example, a child is able to use an object to represent something else, such as pretending a broom is a horse. Role playing also becomes important during this stage. Children often play the role of “mommy,” “daddy,” doctor,” and many others.
Egocentrism: Piaget used a number of clever techniques to study the mental abilities of children. One famous technique for studying egocentrism involved a three-dimensional display of a mountain scene. Children were first asked to choose a picture showing the scene they had observed; most could do this with little difficulty. Next, they were asked to select a picture showing what someone else would have observed looking at the mountain from a different viewpoint. Children almost invariably chose the scene showing their own view of the mountain. According to Piaget, children experience this difficulty because they are unable to take on another person’s perspective.
Conservation: Another well-known experiment demonstrates a child’s understanding of conservation. In one version, equal amounts of liquid are poured into two identical containers. The liquid in one container is then poured into a differently shaped cup, such as a tall, thin cup or a short, wide cup. Children are then asked which cup holds more liquid. Despite having seen that the amounts were equal, children almost always choose the cup that appears fuller.
Concrete Operational Stage
7-11 years—During this stage children begin thinking logically about concrete events, but have difficulty understanding abstract or hypothetical concepts.
Logic: Piaget determined that children in the concrete operational stage were fairly good at the use of inductive logic (going from a specific experience to a general principle). On the other hand, children at this age have difficulty using deductive logic (using a general principle to determine the outcome of a specific event).
Reversibility: One of the most important developments in this stage is an understanding of reversibility, or awareness that actions can be reversed. An example of this is being able to reverse the order of relationships between mental categories. For example, a child might be able to recognize that his or her dog is a Labrador, that a Labrador is a dog, and that a dog is an animal.
Formal Operational Stage
11-15 years—The formal operational stage begins around age eleven and lasts into adulthood. During this time, people develop the ability to think about abstract concepts. Skills such as logical thought, deductive reasoning, and systematic planning also emerge during this stage.
Logic: Piaget believed that deductive logic becomes important during this stage. Deductive logic involves hypothetical situations and is often required in science and mathematics.
Abstract Thought: While children tend to think very concretely and specifically in earlier stages, the ability to think about abstract concepts emerges during this stage. Instead of relying solely on previous experiences, children begin to consider possible outcomes and consequences of their actions.
Problem Solving: In earlier stages, children use trial-and-error to solve problems. At the formal operational stage children approach problems in a logical and methodical manner.



Developmental Stages: Kohlberg

Level I. Preconventional Morality

Stage 1. Obedience and Punishment Orientation
Kohlberg’s stage 1 is similar to Piaget’s first stage of moral thought. The child assumes that powerful authorities hand down a fixed set of rules which he or she must unquestioningly obey. To the Heinz dilemma, the child typically says that Heinz was wrong to steal the drug because “It’s against the law,” or “It’s bad to steal,” as if this were all there were to it. When asked to elaborate, the child usually responds in terms of the consequences involved, explaining that stealing is bad “because you’ll get punished” (Kohlberg, 1958b).
Although the vast majority of children at stage 1 oppose Heinz’s theft, it is still possible for a child to support the action and still employ stage 1 reasoning. For example, a child might say, “Heinz can steal it because he asked first and it’s not like he stole something big; he won’t get punished” (see Rest, 1973). Even though the child agrees with Heinz’s action, the reasoning is still stage 1; the concern is with what authorities permit and punish.
Kohlberg calls stage 1 thinking “preconventional” because children do not yet speak as members of society. Instead, they see morality as something external to themselves, as that which the big people say they must do.
Stage 2. Individualism and Exchange
At this stage children recognize that there is not just one right view that is handed down by the authorities. Different individuals have different viewpoints. “Heinz,” they might point out, “might think it’s right to take the drug, but the druggist would not.” Since everything is relative, each person is free to pursue his or her individual interests. One boy said that Heinz might steal the drug if he wanted his wife to live, but that he doesn’t have to if he wants to marry someone younger and better-looking (Kohlberg, 1963, p. 24). Another boy said Heinz might steal it “because…maybe they had children and he might need someone at home to look after them. But maybe he shouldn’t steal it because they might put him in prison for more years than he could stand” (Colby and Kauffman, 1983, p. 300).
What is right for Heinz, then, is what meets his own self-interests.
You might have noticed that children at both stages 1 and 2 talk about punishment. However, they perceive it differently. At stage 1 punishment is tied up in the child’s mind with wrongness; punishment “proves” that disobedience is wrong. At stage 2, in contrast, punishment is simply a risk that one naturally wants to avoid.
Although stage 2 respondents sometimes sound amoral, they do have some sense of right action. This is a notion of fair exchange or fair deals. The philosophy is one of returning favors–“If you scratch my back, I’ll scratch yours.” To the Heinz story, subjects often say that Heinz was right to steal the drug because the druggist was unwilling to make a fair deal; he was “trying to rip Heinz off.” Or they might say that he should steal for his wife “because she might return the favor some day” (Gibbs et al., 1983, p. 19).
Respondents at stage 2 are still said to reason at the preconventional level because they speak as isolated individuals rather than as members of society. They see individuals exchanging favors, but there is still no identification with the values of the family or community.

Level II. Conventional Morality

Stage 3. Good Interpersonal Relationships.
At this stage children–who are by now usually entering their teens–see morality as more than simple deals. They believe that people should live up to the expectations of the family and community and behave in “good” ways. Good behavior means having good motives and interpersonal feelings such as love, empathy, trust, and concern for others. Heinz, they typically argue, was right to steal the drug because “He was a good man for wanting to save her,” and “His intentions were good, that of saving the life of someone he loves.” Even if Heinz doesn’t love his wife, these subjects often say, he should steal the drug because “I don’t think any husband should sit back and watch his wife die” (Gibbs et al., 1983, pp. 36-42; Kohlberg, 1958b).
If Heinz’s motives were good, the druggist’s were bad. The druggist, stage 3 subjects emphasize, was “selfish,” “greedy,” and “only interested in himself, not another life.” Sometimes the respondents become so angry with the druggist that they say that he ought to be put in jail (Gibbs et al., 1983, pp. 26-29, 40-42). A typical stage 3 response is that of Don, age 13:
It was really the druggist’s fault, he was unfair, trying to overcharge and letting someone die. Heinz loved his wife and wanted to save her. I think anyone would. I don’t think they would put him in jail. The judge would look at all sides, and see that the druggist was charging too much. (Kohlberg, 1963, p. 25)
We see that Don defines the issue in terms of the actors’ character traits and motives. He talks about the loving husband, the unfair druggist, and the understanding judge. His answer deserves the label “conventional morality” because it assumes that the attitude expressed would be shared by the entire community—“anyone” would be right to do what Heinz did (Kohlberg, 1963, p. 25).
As mentioned earlier, there are similarities between Kohlberg’s first three stages and Piaget’s two stages. In both sequences there is a shift from unquestioning obedience to a relativistic outlook and to a concern for good motives. For Kohlberg, however, these shifts occur in three stages rather than two.
Stage 4. Maintaining the Social Order
Stage 3 reasoning works best in two-person relationships with family members or close friends, where one can make a real effort to get to know the other’s feelings and needs and try to help. At stage 4, in contrast, the respondent becomes more broadly concerned with society as a whole. Now the emphasis is on obeying laws, respecting authority, and performing one’s duties so that the social order is maintained. In response to the Heinz story, many subjects say they understand that Heinz’s motives were good, but they cannot condone the theft. What would happen if we all started breaking the laws whenever we felt we had a good reason? The result would be chaos; society couldn’t function. As one subject explained,

I don’t want to sound like Spiro Agnew, law and order and wave the flag, but if everybody did as he wanted to do, set up his own beliefs as to right and wrong, then I think you would have chaos. The only thing I think we have in civilization nowadays is some sort of legal structure which people are sort of bound to follow. [Society needs] a centralizing framework. (Gibbs et al., 1983, pp. 140-41)

Because stage 4 subjects make moral decisions from the perspective of society as a whole, they think from a full-fledged member-of-society perspective (Colby and Kohlberg, 1983, p. 27).

You will recall that stage 1 children also generally oppose stealing because it breaks the law. Superficially, stage 1 and stage 4 subjects are giving the same response, so we see here why Kohlberg insists that we must probe into the reasoning behind the overt response. Stage 1 children say, “It’s wrong to steal” and “It’s against the law,” but they cannot elaborate any further, except to say that stealing can get a person jailed. Stage 4 respondents, in contrast, have a conception of the function of laws for society as a whole–a conception which far exceeds the grasp of the younger child.

Level III. Postconventional Morality
Stage 5. Social Contract and Individual Rights.
At stage 4, people want to keep society functioning. However, a smoothly functioning society is not necessarily a good one. A totalitarian society might be well-organized, but it is hardly the moral ideal. At stage 5, people begin to ask, “What makes for a good society?” They begin to think about society in a very theoretical way, stepping back from their own society and considering the rights and values that a society ought to uphold. They then evaluate existing societies in terms of these prior considerations. They are said to take a “prior-to-society” perspective (Colby and Kohlberg, 1983, p. 22).
Stage 5 respondents basically believe that a good society is best conceived as a social contract into which people freely enter to work toward the benefit of all. They recognize that different social groups within a society will have different values, but they believe that all rational people would agree on two points. First, they would all want certain basic rights, such as liberty and life, to be protected. Second, they would want some democratic procedures for changing unfair laws and for improving society. In response to the Heinz dilemma, stage 5 respondents make it clear that they do not generally favor breaking laws; laws are social contracts that we agree to uphold until we can change them by democratic means. Nevertheless, the wife’s right to live is a moral right that must be protected. Thus, stage 5 respondents sometimes defend Heinz’s theft in strong language:
It is the husband’s duty to save his wife. The fact that her life is in danger transcends every other standard you might use to judge his action. Life is more important than property.

This young man went on to say that “from a moral standpoint” Heinz should save the life of even a stranger, since to be consistent, the value of a life means any life. When asked if the judge should punish Heinz, he replied:

Usually the moral and legal standpoints coincide. Here they conflict. The judge should weigh the moral standpoint more heavily but preserve the legal law in punishing Heinz lightly. (Kohlberg, 1976, p. 38)

Stage 5 subjects, then, talk about “morality” and “rights” that take some priority over particular laws. Kohlberg insists, however, that we do not judge people to be at stage 5 merely from their verbal labels. We need to look at their social perspective and mode of reasoning. At stage 4, too, subjects frequently talk about the “right to life,” but for them this right is legitimized by the authority of their social or religious group (e.g., by the Bible). Presumably, if their group valued property over life, they would too. At stage 5, in contrast, people are making more of an independent effort to think out what any society ought to value. They often reason, for example, that property has little meaning without life. They are trying to determine logically what a society ought to be like (Kohlberg, 1981, pp. 21-22; Gibbs et al., 1983, p. 83).
Stage 6. Universal Principles.
Stage 5 respondents are working toward a conception of the good society. They suggest that we need to (a) protect certain individual rights and (b) settle disputes through democratic processes. However, democratic processes alone do not always result in outcomes that we intuitively sense are just. A majority, for example, may vote for a law that hinders a minority. Thus, Kohlberg believes that there must be a higher stage–stage 6–which defines the principles by which we achieve justice.
Kohlberg’s conception of justice follows that of the philosophers Kant and Rawls, as well as great moral leaders such as Gandhi and Martin Luther King. According to these people, the principles of justice require us to treat the claims of all parties in an impartial manner, respecting the basic dignity of all people as individuals. The principles of justice are therefore universal; they apply to all. Thus, for example, we would not vote for a law that aids some people but hurts others. The principles of justice guide us toward decisions based on an equal respect for all.
In actual practice, Kohlberg says, we can reach just decisions by looking at a situation through one another’s eyes. In the Heinz dilemma, this would mean that all parties–the druggist, Heinz, and his wife–take the roles of the others. To do this in an impartial manner, people can assume a “veil of ignorance” (Rawls, 1971), acting as if they do not know which role they will eventually occupy. If the druggist did this, even he would recognize that life must take priority over property; for he wouldn’t want to risk finding himself in the wife’s shoes with property valued over life. Thus, they would all agree that the wife must be saved–this would be the fair solution. Such a solution, we must note, requires not only impartiality, but the principle that everyone is given full and equal respect. If the wife were considered of less value than the others, a just solution could not be reached.
Until recently, Kohlberg had been scoring some of his subjects at stage 6, but he has temporarily stopped doing so. For one thing, he and other researchers had not been finding subjects who consistently reasoned at this stage. Also, Kohlberg has concluded that his interview dilemmas are not useful for distinguishing between stage 5 and stage 6 thinking. He believes that stage 6 has a clearer and broader conception of universal principles (which include justice as well as individual rights), but feels that his interview fails to draw out this broader understanding. Consequently, he has temporarily dropped stage 6 from his scoring manual, calling it a “theoretical stage” and scoring all postconventional responses as stage 5 (Colby and Kohlberg, 1983, p. 28).
Theoretically, one issue that distinguishes stage 5 from stage 6 is civil disobedience. Stage 5 would be more hesitant to endorse civil disobedience because of its commitment to the social contract and to changing laws through democratic agreements. Only when an individual right is clearly at stake does violating the law seem justified. At stage 6, in contrast, a commitment to
justice makes the rationale for civil disobedience stronger and broader. Martin Luther King, for example, argued that laws are only valid insofar as they are grounded in justice, and that a commitment to justice carries with it an obligation to disobey unjust laws. King also recognized, of course, the general need for laws and democratic processes (stages 4 and 5), and he was therefore willing to accept the penalties for his actions. Nevertheless, he believed that the higher principle of justice required civil disobedience (Kohlberg, 1981, p. 43).
At stage 1 children think of what is right as that which authority says is right. Doing the right thing is obeying authority and avoiding punishment. At stage 2, children are no longer so impressed by any single authority; they see that there are different sides to any issue. Since everything is relative, one is free to pursue one’s own interests, although it is often useful to make deals and exchange favors with others.
At stages 3 and 4, young people think as members of the conventional society with its values, norms, and expectations. At stage 3, they emphasize being a good person, which basically means having helpful motives toward people close to one. At stage 4, the concern shifts toward obeying laws to maintain society as a whole.
At stages 5 and 6 people are less concerned with maintaining society for its own sake, and more concerned with the principles and values that make for a good society. At stage 5 they emphasize basic rights and the democratic processes that give everyone a say, and at stage 6 they define the principles by which agreement will be most just.


Developmental Stages: Ken Wilber
Sensorimotor: 0-2 Years (Archaic and Archaic-Magic)

By the time of birth, the human being has developed from protoplasmic irritability to sensation to perception to impulse to proto-emotion…But none of these functions is yet clearly differentiated (or integrated), and the first years of life are a quick coming-to-terms with the physiosphere (non-biological features of the universe, including stars and planets) and the biosphere (the domain of life, includes but transcends the physiosphere) both within and without, in preparation for the emergence of the noosphere (includes complex sentient life, such as mammals and humans), which begins in earnest around age two with the emergence of language.
Thus Piaget, for example, in speaking of the first year of life, says that “the self is here material, so to speak.” It is still, that is, embedded primarily in the physiosphere. In the first place, the infant cannot easily distinguish between subject and object or self and material environment, but instead lives in a state of “primary narcissism” (Freud) or “oceanic adualism” (Arieti) or “pleromatic fusion” (Jung) or primary “indissociation” (Piaget). The infant’s self and material environment (and especially the mother) are in a state of primitive nondifferentiation or indissociation. On the psychosexual side, this is the “oral phase” because the infant is coming to terms with food, physical nourishment, life in the physiosphere.
Sometime between the fourth and ninth month, this archaic indissociation gives way to a physical bodyself differentiated from the physical environment—the “real birth” of the individual physical self. Margaret Mahler actually refers to it as “hatching.” The infant bites its thumb and it hurts, bites the blanket and it doesn’t. There is a difference, it learns, between the physical self and the physical other.
Another way to put this is to say that, with this first major differentiation, consciousness seats itself in the physical body, grounds itself in the physiosphere…Many researchers…have concluded that if, due to physiological/genetic factors or repeated trauma, consciousness fails to seat itself in the physical self, the result is psychosis of one sort or another. Psychosis is many things…but it certainly includes a failure to establish a grounded physical self clearly differentiated from the environment. The psychotic, as R.D. Laing put it, is constantly “jumping out of the body”; he or she cannot easily differentiate where the body stops and the chair begins; subject and object collapse in a state of fusion and confusion, with hallucinatory blurring of boundaries, and so forth. Psychosis, we may say, is a failure to differentiate and integrate the physiosphere.
If all goes relatively well, then the infant transcends the archaic fusion state and emerges or hatches as a grounded self.
The sensorimotor period (0-2) is thus predominantly concerned with differentiating the physical self from the physical environment, and results, toward the end of the second year, in what Piaget calls physical “object permanence,” the capacity of the infant to understand that physical objects exist independently of him or her (i.e., the physical world exists independently of one’s egocentric wishes about it).
Thus, out of an initial state of primary indissociation (“protoplasmic,” Piaget also calls it), the physical self and the physical other emerge.
It is through a progressive differentiation that the internal world comes into being and is contrasted with the external. Neither of these two terms is given at the start…During the early stages the [physical] world and the self are one; neither term is distinguished from the other. But when they become distinct, these two terms begin by remaining very close to each other: the world is still conscious and full of intentions, the self is still material, so to speak, and only slightly interiorized. At every stage there remain in the conception of nature what we might call “adherences”: fragments of internal experience which still cling to the external world.
At the end of the sensorimotor period, the physical self and physical other are clearly differentiated, but as the mind begins to emerge with preop, the mental images and symbols themselves are initially fused and confused with the external world, leading to what Piaget calls “adherences,” which children themselves will eventually reject as being inadequate and misleading.

We have distinguished [several] varieties of adherences defined in this way. There are, to begin with, during a very early stage, feelings of participation accompanied sometimes by magical beliefs; the sun and moon follow us, and if we walk, it is enough to make them move along; things notice us and obey us, like the wind, the clouds, the night, etc.; the moon, the street lamps, etc., send us dreams “to annoy us,” etc., etc. In short, the world is filled with tendencies and intentions which are [centered on] our own. A second form of adherence, closely allied to the preceding, is that constituted by animism, which makes a child endow things with consciousness and life [oriented solely toward the child]…In this magico-animistic order: on the one hand, we issue commands to things (the sun and the moon, the clouds and the sky follow us), on the other hand, these things acquiesce in our desires because they wish to do so. A third form is artificialism [anthropocentrism]. The child begins by thinking of things in terms of his own “I”: the things around him take notice of man and are made for man; everything about them is willed and intentional, everything is organized for the good of men. If we ask the child, or the child asks himself, how things began, he has recourse to man to explain them. Thus artificialism is based on feelings of participation which constitute a very special and very important class of adherences.
As we shall see, Piaget believes that the major and in many ways defining characteristic of all adherences is egocentrism, or an early and initial inability to transcend one’s own perspective and understand that reality is not self-centered. Development proceeds slowly from egocentrism to perspectivism, from realism to reciprocity and mutuality, and from absolutism to relativity:
This formula means that the child, after having regarded his own point of view as absolute,
comes to discover the possibility of other points of view and to conceive of reality as constituted, no longer by what is immediately given, but by what is common to all points of view taken together. One of the first aspects of this process is the passage from realism of perception to interpretation properly so called. All the younger children take their immediate perceptions as true, and then proceed to interpret them according to their own egocentric relations.
The most striking example is that of the clouds and the heavenly bodies, of which children believe that they follow us. The sun and moon are small globes traveling a little way above the level of the roofs of houses and following us about on our walks. Even the child of 6-8 years does not hesitate to take this perception as the expression of truth, and, curiously enough, he never thinks of asking himself whether these heavenly bodies do not also follow other people.
When we ask the cautious question as to which of two people walking in the opposite direction the sun would prefer to follow, the child is taken aback and shows how new the question is to him. [Older children,] on the other hand, have discovered that the sun follows everybody. From this they conclude that the truth lies in the reciprocity of the points of view: that the sun is very high up, that it follows no one…
Piaget is at pains to indicate that the process of differentiation/integration between internal and external world is a long and slow one. It is not, for example, that magico-animistic beliefs are present at one stage and then completely disappear at the next, but rather that cognitions referred to as “magical” become progressively less and less prominent as development proceeds, moving from a “pure magical autism” to mental egocentricity to reciprocal and mutual sharing. In a very important passage Piaget gets to the heart of the matter.
For the construction of the objective world and the elaboration of strict reasoning both consist in a gradual reduction of egocentricity in favor of…reciprocity of viewpoints. In both cases, the initial state is marked by the fact that the self is confused with the external world and with other people; the vision of the world is falsified by subjective adherences, and the vision of other people is falsified by the fact that the personal point of view predominates, almost to the exclusion of all others. Thus in both cases, truth is obscured by the ego. Then, as the child discovers that others do not think as he does, he makes efforts to adapt himself to them, he bows to exigencies of control and verification which are implied by discussion and argument, and thus comes to replace egocentric logic by the logic created by social life. We saw that exactly the same process took place with regard to the idea of reality. There is therefore an egocentric logic and an egocentric ontology, of which the consequences are parallel; they both falsify the perspective of relations and of things, because they both start from the assumption that other people understand us and agree with us from the first, and that things revolve around us with the sole purpose of serving us and resembling us.
A note on terminology: Piaget divides each of the major cognitive stages into at least two substages (early and late preop, early and late conop, early and late formop), and I have generally followed Piaget in this regard. Since we have also been using Gebser’s general worldview terminology of archaic, magic, mythic, and mental (with clear implication that they are referring to essentially similar stages), I will often hybridize Gebser’s terminology to match Piaget’s substages, so that we have a continuum of archaic, archaic-magic, mythic, mythic-rational, rational, rational-existential (and into vision-logic, psychic, etc.)…
The preponderance of indissociations and adherences at the sensorimotor and early preoperational stages has led Piaget to refer to this general early period as one of “magical cognitions” or “magic proper.” As he explains:
The first [general stage] is that which precedes any clear consciousness of the self, and may be arbitrarily set down as lasting until the age of 2-3, that is, till the appearance of the first “whys,” which symbolize in a way the first awareness of resistance in the external world. As far as we can conjecture, two phenomena characterize this first stage [the overall archaic-magic]. From the [internal] point of view, it is pure autism, or thought akin to dreams and daydreams, thought in which truth is confused with desire. To every desire corresponds immediately an image or illusion which transforms this desire into reality, thanks to a sort of pseudo-hallucination or play. No objective observation or reasoning is possible: there is only a perpetual play which transforms perceptions and creates situations in accordance with the subject’s pleasure [this is a stage that is often eulogized and “elevated” by the Romantics, such as Norman O. Brown, to a “spiritual nondual” state, whereas it is actually, as we have seen, a very egocentric, narcissistic state: prerational, not transrational]. From the ontological viewpoint, what corresponds to this manner of thinking is primitive psychological causality, probably in a form that implies magic proper: the belief that any desire whatsoever can influence objects, the belief in the obedience of external things. Magic and autism are therefore two different sides of one and the same phenomenon—that confusion between the self and the world…
Preoperational: 2-7 Years (Magic and Magic-Mythic)
If all goes relatively well, the infant transcends the early archaic fusion state and emerges or hatches as a grounded physical self. But if the infant’s physical body is now separated from the environment, its emotional body is not. The infant’s emotional self still exists in a state of indissociation from other emotional objects, in particular the mothering one. But then, around eighteen months or so, the infant learns to differentiate its feelings from the feelings of others (this is the second major differentiation, or “second fulcrum”). Its own biosphere is differentiated from the biosphere of those around it—in other words, it transcends its embeddedness in the undifferentiated biosphere…
Mahler refers to this crucial transformation (the second fulcrum) as the “separation-individuation phase,” or the differentiation-and-integration of a stable emotional self (whereas the previous fulcrum, as we saw, was the differentiation/integration of the physical self). Mahler actually calls this fulcrum “the psychological birth of the infant,” because the infant emerges from its emotional fusion with the (m)other.
A developmental miscarriage at this crucial fulcrum (according to Mahler, Kernberg, and others) results in narcissistic and borderline pathologies, because if the infant does not differentiate-separate its feelings from the feelings of those around it, then it is open to being “flooded” and “swept away” by its emotional environment, on the one hand (the borderline syndromes), or it can treat the entire world as a mere extension of its own feelings (the narcissistic condition)—both of which result from a failure to transcend an embeddedness in the undifferentiated biosphere. One remains in indissociation with, or “merged” with, the biosphere, stuck in the biosphere, just as with the previous psychoses one remains merged with or stuck in the physiosphere.
By around age three, if all has gone well, the young child has a stable and coherent physical and emotional self; it has differentiated and integrated, transcended and preserved, its own physiosphere and biosphere. By this time language has begun to emerge, and development in the noosphere begins in earnest.
Thus, the intensity of the early archaic-magic declines with the differentiation of the emotional self and the emotional other (24-36 months)—but, according to Piaget, magical cognitions continue to dominate the early preoperational period (2-4 years), the period I simply call “magic.”
In other words, the first major layer of the noosphere is magical. During this period, the newly emerging images and symbols do not merely represent objects; they are thought to be concretely part of the things they represent, and thus “word magic” abounds:
Up to the age 4-5, [the child] thinks that he is “forcing” or compelling the moon to move; the relation takes on an aspect of dynamic participation or of magic. From 4-5 he is more inclined to think that the moon is trying to follow him: the relation is animistic. Closely akin to this participation is magical causality, magic being in many respects simply participation: the subject regards his gestures, his thoughts, or the objects he handles, as charged with efficacy, thanks to the very participations which he establishes between those gestures, etc., and the things around him [“adherences”]. Thus, a certain word acts upon a certain thing; a certain gesture will protect one from a certain danger; a certain white pebble will bring about the growth of the water lilies, and so on…
Piaget refers to such magical cognitions as a form of “participation”— that is, the subject and the object, and various objects themselves, are “linked” by certain types of adherences, or felt connections, connections that nonetheless violate the rich fabric of relations actually constituting the object.
This is very much what Freud referred to as the primary process, which is governed by two general laws, that of displacement and that of condensation. In displacement, two different objects are equated or “linked” because they share similar parts or predicates (a relation of similarity; if one Asian person is bad, all Asians must be bad). In condensation, different objects are related because they exist in the same space (a relation of contiguity: a lock of hair of a great warrior “contains” in condensed form the power of the warrior)…
Put simply, such primary process or magical cognition…does not set whole and part in a rich network of mutual relationships, but short-circuits the process by merely collapsing or confusing various wholes and parts—what Piaget called syncretism and juxtaposition (again, similarity and contiguity). Magical cognition, then, is fused and confused wholes and parts, and not mutually related wholes and parts. These “fused networks” of “syncretic wholes” appear very holistic (or “holographic”), but are actually not very coherent and do not even match the already available sensorimotor evidence.
[This] type of relation is participation. This type is more frequent than would at first appear to be the case, but it disappears after the age of 5-6. Its first principle is the following: two things between which there subsist relations either of resemblance [similarity; metaphor] or of general affinity [contiguity; metonym], are conceived as having something in common which enables them to act upon one another at a distance, or more precisely, to regard one as a source of emanation, the other as the emanation of the first. Thus air or shadows in a room emanate from the air and shadows out of doors. Thus also dreams, which are sent to us by birds “who like the wind.” [The child] begins, indeed, as we do, by feeling the analogy of the shadow cast by the brook with the shadows of trees, houses, etc. But this analogy does not lead him to identify the particular cases with one another. So that we have here, not an analogy proper, but syncretism. The child argues as follows: “This book makes a shadow; trees, houses, etc., make shadows. The book’s shadow (therefore) comes from the trees and the houses. Thus, from the point of view of the cause or of the structure of the object, there is participation, syncretistic schemas resulting from the fusion of singular terms…
The Shift From Magic to Mythic
As we move from early preoperational (2-4 years; “magic”) to late preoperational (4-7 years; “magic-mythic”), similar types of adherences continue to dominate awareness. But one crucial difference comes to the fore: magic proper—the belief that the subject can magically alter the object—diminishes rapidly. Continued interaction with the world eventually leads the subject to realize that his or her thoughts do not egocentrically control, create, or govern the world. The “hidden linkages” don’t hold up in reality.
Magic proper thus diminishes, or rather, the omnipotent magic of the individual subject—a magic that no longer “works”—is simply transferred to other subjects. Maybe I can’t order the world around, but Daddy (or God or the volcano spirit) can.
And thus onto the scene come crashing a hundred gods and goddesses, all capable of doing what I can no longer do: miraculously alter the patterns of nature in order to cater to my wants. Whereas in the earlier magical stages proper, the secret of the universe was to learn the right type of magic that would directly alter the world, the focus now is to learn the right rituals and prayers that will make the gods and goddesses intervene and alter the world for me. Piaget:
The possibility of miracles is, of course, admitted, or rather, miracles form part of the child’s conception of the world, since law [at this stage] is a moral thing with the possibility of numerous exceptions [“suspended by God” or a powerful other]. Children have been quoted who asked their parents to stop the rain, to turn spinach into potatoes, etc.
Thus the shift from magic to magic-mythic. Piaget: “The first stage is magical: we make the clouds move by walking. The cloud obeys us at a distance. The average age of this stage is 5. The second stage [magic-mythic] is both artificialist and animistic. Clouds move because God or [other] men make them move. The average age of this stage is 6.” It is from this magic-mythic structure that so many of the world’s classical mythologies seem in large part to issue. As Philip Cowan points out, “During the [late preop or magic-mythic] stage, there is still a confusion between physical and personal causality; the physical world appears to operate much the way people do. All of these examples [show that the late preop] children already have developed elaborate mythologies about cosmic questions such as the nature of life (and death) and the cause of wind [and so forth]. Further, these mythologies show many similarities from child to child across cultures and do not seem to have been directly taught by adults.”
Myth And Archetype
This directly brings us, of course, to the work of Carl Jung and his conclusion that the essential forms and motifs of the world’s great mythologies—the “archaic forms” or “archetypes”—are inherited in the individual psyche of each of us.
It is not often realized that Freud was in complete agreement with Jung about the existence of this archaic heritage. Freud was struck by the fact that individuals in therapy kept reproducing essentially similar “phantasies,” phantasies that seemed therefore somehow to be collectively inherited. “Whence comes the necessity for these phantasies and the material for them?” he asks. “How is it to be explained that the same phantasies are always formed with the same content? I have an answer to this which I know will seem to you very daring. I believe that these primal phantasies are a phylogenetic possession. In them the individual stretches out to the experiences of past ages.”
This phylogenetic or “archaic heritage” includes, according to Freud, “abbreviated repetitions of the evolution undergone by the whole human race through the long-drawn-out periods and from the pre-historic ages.” Although, as we will see, Freud and Jung differed profoundly over the actual nature of this archaic heritage, Freud nevertheless made it very clear that “I fully agree with Jung in recognizing the existence of this phylogenetic heritage.”
Piaget has also written extensively on his essential agreement with and appreciation of Jung’s work. But he differs with Jung in that he does not see the archetypes themselves as being directly inherited from past ages, but rather as being the secondary by-products of cognitive structures which themselves are similar wherever they develop and which, in interpreting a common physical world, generate common motifs.
But whether we follow Freud, Jung, or Piaget, the conclusion is essentially the same: all the world’s great mythologies exist today in each of us, in me and in you. They are produced, and can at any time be produced, by the archaic, the magic, and the mythic structures of our own compound individuality (and classically by the magic-mythic structure).
The question then centers—and here Freud and Jung bitterly parted ways—on the nature and function of these mythic motifs, these archetypes. Are they merely infantile and regressive (Freud), or do they also contain a rich source of spiritual wisdom (Jung)? Piaget, needless to say, sided with Freud on this particular issue. I have already suggested that I do not see these particular “archetypes” as being quite the high source of transpersonal wisdom that Jung believed; but the situation is very subtle and complex, and we will return to it later…in connection with Joseph Campbell…
Campbell, we will see, believes that in certain circumstances…the early mythic archetypes can carry profound religious and spiritual meaning and power. But even Campbell clearly acknowledges (and indeed stresses) that the early and late preoperational stages themselves are both marked by a great deal of egocentrism, anthropocentrism, and geocentrism.
Put differently, still lying “close to the body,” preoperational cognition does not easily take the role of other, nor does it yet clearly differentiate the noosphere and the biosphere. Even in late preoperational thinking, the child firmly believes that names are a part of, or actually exist in, the objects named. “What are names for?” a child of five was asked. “They are what you see when you look at things.” “Where is the name of the sun?” “Inside the sun.” As one child summarized it: “If there weren’t any words it would be very bad. You couldn’t make anything. How could things have been made?” Joseph Campbell comments:
In the cosmologies of archaic man, as in those of infancy, the main concern of the creator was in the weal and woe of man. Light was made so that we should see; night so that we might sleep; stars to foretell the weather; clouds to warn of rain. The child’s view of the world is not only geocentric, but egocentric. And if we add to this simple structure the tendency recognized by Freud, to experience all things in association with the subjective formula of the family romance [Oedipus/Electra], we have a rather tight and very slight vocabulary of elementary ideas, which we may expect to see variously inflected and applied in the mythologies of the world.
The emergence of the noosphere: First images (at around 7 months), then symbols (the first full-fledged symbol probably being the word “no!”), then concepts (around 3-4 years), all aided immeasurably by the emergence of language.
“No” is the first form of specifically mental transcendence. Images begin this mental transcendence, but images are tied to their sensory referents. With “no” I can for the first time decline to act on my bodily impulses or on your desires (which every parent discovers in the child during the “terrible twos”). For the first time in development, the child can begin to transcend its merely biological or biocentric or egocentric embeddedness, begin to exert control over bodily desires and bodily discharges and bodily instincts, while also “separating-individuating” itself from the will of others. The Freudian fuss over “toilet training” and the “anal phase” simply refers to the fact that a mental-linguistic self is beginning to emerge and beginning to exert some type of conscious will and conscious control over its spontaneous biospheric productions, and over its being “controlled” by others as well.
In short, it is only with language that the child can differentiate its mind and body, differentiate its mental will and its bodily impulses, and then begin to integrate its mind and body. This is the third major differentiation, or the third fulcrum. The failure to differentiate mind and body—the failure to transcend this stage—is another way to say “remains stuck in the body or the biosphere,” which…is the primary developmental lesion underlying the narcissistic/borderline pathologies.
But “no!” can go too far, and therein lie all the horrors of the noosphere. For if it is indeed with language that the child can differentiate mind and body, differentiate the noosphere and the biosphere, that differentiation (as always) can go too far and result in dissociation. The mind does not just transcend and include the body, it represses the body, represses its sensuality, represses its sexuality, represses its rich roots in the biosphere. Repression, in the Freudian (and Jungian) sense, comes into existence only with the “language barrier,” with a “no!” carried to extremes. And the result of this extreme “no!” is technically called “neurosis” or “psychoneurosis.”
Every neurosis, in other words, is a miniature ecological crisis. It is a refusal to include in the compound individual some aspect of organic life, emotional-sexual life, reproductive life, sensuous life, libidinal life, biospheric life. It is a denial of our roots and our foundation. Neurosis, in this sense, is an assault on the biosphere by the noosphere…neurotic symptoms—anxieties and depressions and obsessions…now force themselves into consciousness in hidden forms, attempting to get the noosphere off its back.
And the neurotic symptoms disappear, or are healed, only as consciousness relaxes its repression, recontacts and befriends the biosphere that exists in its own being, and then reintegrates that biosphere with the newly emergent noosphere…This is called “uncovering the shadow,” and the shadow is…the biosphere.
Thus, if remaining stuck in the biosphere results in the borderline/narcissistic conditions, going to the other extreme and alienating the biosphere results directly in the psychoneuroses. It follows that our present-day worldwide ecological crisis is, in the very strictest sense of the term, a worldwide collective neurosis—and is about to result in a worldwide nervous breakdown.
This crisis is…in no way going to “destroy the biosphere”—the biosphere will survive, in some form or another (even if just viral and bacterial), no matter what we do to it. What we are doing, rather, is altering the biosphere in a way that will not support higher life forms and especially will not support the noosphere. That “alteration” is, in fact, a repression, an alienation, a denial of our common ancestry, a denial of our relational existence with all of life. It is not a destruction of the biosphere but a denial of the biosphere, and that is the precise definition of psychoneurosis.
What Freud found his patients doing on a couch in Vienna, we have now collectively managed to do to the world at large. And who shall be our doctor?
Concrete Operational (7-12, Mythic And Mythic-Rational)
Assuming development goes relatively smoothly, then with the first significant differentiation of the mind and the body, the mind can transcend its embeddedness in a merely bodily orientation—absorbed in itself (egocentric)—and begin to enter the world of other minds. But to learn to do so it must learn to take the role of other—a new, emergent, and very difficult task.
In other words, the self has gone from a physiocentric identity (first fulcrum) to a biocentric identity (second fulcrum) to an early noospheric identity (third fulcrum), all of which are thoroughly egocentric and anthropocentric (magic and magic-mythic all centered on the self and oriented exclusively to the self, however “otherworldly” or “sacred” it might all appear).
If the sensorimotor and preoperational world is egocentric, the concrete operational world is sociocentric (centered not so much on a bodily identity as on a role identity, as we shall see). It still contains “mythic” and “anthropocentric” elements because, as Cowan puts it, “there are still various colorings of the previous stages” (which is why I call early and late conop, respectively, mythic and mythic-rational). A more differentiated causation by “five elements” (water, earth, fire, ether) tends to replace more syncretic explanations, and there often emerges a belief in causation by “preformation” (the acorn contains a fully formed but miniature oak tree).
But by far the most significant transformation or transcendence occurs in the capacity to take the role of other—not just to realize that others have a different perspective, but to be able to mentally reconstruct that perspective, to put oneself in the other’s shoes.
In what became known as the Three Mountains Task, Piaget exposed children from four to twelve years old to a play set that contained three clay mountains, each of a different color, and a toy doll. The questions were simple: what do you see, and what does the doll see?
The typical response of the preoperational child is that the doll sees the same thing that the child is looking at, even if the doll is facing only, say, the green mountain. The child does not understand that there are different perspectives involved. At a later stage of preop, the child will correctly indicate that the doll has a different perspective, but the child cannot say exactly what it is.
But with the emergence of concrete operational, the child will easily and readily describe the true perspective of the doll (e.g., “I am looking at all three mountains, but the doll is only looking at the green mountain”).
Investigation of these and similar tasks…has confirmed the general conclusion: only with the emergence of concrete operational thought can the child transcend his or her egocentric perspective and take the role of other. As Habermas would put it, a role identity supplements a natural (or bodily) identity (the body cannot take the role of other). The child learns his or her role in a society of other roles, and must now learn to differentiate that role from the role of others and then integrate that role in the newly emergent worldspace (this is the fourth major fulcrum, the fourth major differentiation/integration of self-development). The fundamental locus of self-identity thus switches from egocentric to sociocentric.
Initially the child is indissociated from his or her role, is embedded or “stuck” in it (just as he or she was initially stuck in the physiosphere and then stuck in the biosphere). This unavoidable (and initially necessary) “sociocentric embeddedness” leads to what is variously known as the conventional stages of morality (Kohlberg/Gilligan), the belongingness needs (Maslow), the conformist mode (Loevinger).
Which is why pathology at this stage is known generally as “script pathology.” One is having trouble not with the physiosphere (psychoses), not with the biosphere (borderline and neuroses)—rather, one is stuck in the early roles and scripts given by one’s parents, one’s society, one’s peer group: scripts that are not, and initially cannot be, checked against further evidence, and therefore scripts that are often outmoded, wrong, even cruel (“I’m no good, I’m rotten to the core, I can’t do anything right,” etc.; these do not so much concern bodily impulses, as in the psychoneuroses, but rather social judgments about one’s social standing, one’s role).
Therapy here involves digging up these scripts and exposing these myths to the light of more mature reason and more accurate information, thus “rewriting the script.” (This is, for example, the primary approach of cognitive therapy and interpersonal therapy; not so much the digging up of buried and alienated bodily impulses, as important as that may be, but replacing false and distorting cognitive maps with more reasonable judgments.)
Equally important to the taking of roles is the capacity of conop to work with mental rules. We saw that preop works with images (pictorial representation), symbols (nonpictorial representation), and concepts (which represent an entire class of things). Rules go one step further and operate upon concrete classes, and thus these rules (like multiplication, class inclusion, hierarchization) begin to grasp the incredibly rich relationships between various wholes and parts.
That is, concrete operational is the first structure that can clearly grasp the nature of a holon, of that which in one relationship is a whole and at the same time in another is merely a part (which is why value holarchies start to emerge spontaneously at that point; they switch from the rather strong “either-or” desires of preop to a continuum of preferences). All of this, of course, depends upon the capacity of conop to begin to take different perspectives and relate those perspectives to each other.
Because of its capacity to operate with both rules and roles, I also call this structure the rule/role mind. In relation to the previous stages, it represents a greater transcendence, a greater autonomy, a greater interiority, a higher and wider identity, a greater consciousness, but one that, as in all previous stages, is initially “captured” by the self and the objects—now a social self and now social objects (roles)—that dominate this stage.
And thus a self now open to new and higher pathologies, which demand new and different therapies. No longer stuck in the physiosphere, stuck in the biosphere, or stuck in the early “egosphere,” the pathological self is here stuck in the sociosphere, embedded in a particular society’s rules and myths and dogmas, with no way to transcend that mythic-membership, and thus destined to play out the roles and rules of a particular and isolated society.
Mythic-membership is sociocentric and thus ethnocentric: one is in the culture (a member of the culture) if one accepts the prevailing mythology, and one is excommunicated from the culture if the belief system is not embraced. In this structure, there is no way a global or planetary culture can even be conceived unless it involves the imposing of one’s particular mythology on all peoples: which is just what we saw with the mythic imperialism of the great empires, from the Greek and Roman to the Khans and Sargons to the Incas and Aztecs. These great empires all overcame the egocentrism of local and warring tribes by subsuming their regimes into that of the empire (thus negating and preserving them in a larger reach or communion), and this was accomplished in part…under the umbrella of a mythology that unified different tribes, not by blood or kinship (for that is impossible, since each tribe has a different lineage), but rather by a common mythological origin that could unite the various roles (as the twelve Tribes of Israel were united by a common Yahweh).
But as mature egoic-rationality begins to emerge, ethnocentric gives way to worldcentric.
The Ego
We saw that Habermas referred to the transcendence from conop to formop as a transformation from a role identity to an ego identity. “Ego” here doesn’t mean “egocentric”; on the contrary, it means moving from a sociocentric to a worldcentric capacity, a capacity to distance oneself from one’s egocentric and ethnocentric embeddedness and consider what would be fair for all peoples and not merely one’s own.
It would be helpful, then, to discuss the meaning of the word ego. Particularly in transpersonal circles, no word has caused more confusion. Ego, along with rationality, is generally the dirty word in mystical, transpersonal, New Age circles, but few researchers seem even to define it, and those who do, do so differently.
We can, of course, define ego any way we like as long as we are consistent. Most New Age writers use the term very loosely to mean a separate-self sense, isolated from others and from a spiritual Ground. Unfortunately, these writers do not clearly distinguish states that are pre-egoic from those that are transegoic, and thus half of their recommendations for salvation are often recommendations for various ways to regress, and this rightly sends alarms through orthodox researchers. Nonetheless, their general conclusion is that all truly spiritual states are “beyond ego,” which is true enough as far as it goes, but which terribly confuses the picture unless it is carefully qualified.
In most psychoanalytically oriented writers, the ego has come to mean “the process of organizing the psyche,” and in this regard many researchers, such as Heinz Kohut, now prefer the more general term self. The ego (or self), as the principle that gives unity to the mind, is thus a crucial and fundamental organizing pattern, and to try to go “beyond ego” would mean not transcendence but disaster, and so these orthodox theorists are utterly perplexed by what “beyond ego” could possibly mean, and who could possibly desire it—and, as far as that definition goes, they too are quite right.
Furthermore, according to such philosophers as Fichte, this pure Ego is one with absolute Spirit, which is precisely the Hindu formula Atman=Brahman. To hear Spirit described as pure Ego often confuses New Agers, who generally want ego to mean only “the devil” (even though they heartily embrace the identical notion Atman=Brahman).
They are equally confused when someone like Jack Engler, a theorist studying the interface of psychiatry and meditation, states that “meditation increases ego strength,” which it most certainly does, because “ego strength” in the psychiatric sense means “capacity for disinterested witnessing.” But the New Agers think that meditation means “beyond ego,” and thus anything that strengthens the ego is simply more of the devil. And so the confusions go.
Ego is simply Latin for “I.” Freud, for example, never used the term ego; he used the German pronoun das Ich, or “the I,” which was unfortunately translated as the “ego.” And contrasted to “the I” was what Freud called the Es, which is German for “it,” and which, also unfortunately, was translated as the “id” (Latin for “it”), a term Freud never used. Thus Freud’s great book The Ego and the Id was really called “The I and the It.” Freud’s point was that people have a sense of I-ness or selfness, but sometimes part of their own self appears foreign, alien, separate from them—appears, that is, as an “it” (we say, “The anxiety, it makes me uncomfortable,” or “The desire to eat, it’s stronger than me!” and so forth, thus relinquishing responsibility for our own states). When parts of the I are split off or repressed, they appear as symptoms or “its” over which we have no control.
Freud’s basic aim in therapy was therefore to reunite the I and the it and thus heal the split between them. His most famous statement of the goal of therapy—“Where id was, there ego shall be”—actually reads, “Where it was, there I shall be.” Whether one is a Freudian or not, this is still the most accurate and succinct summary of all forms of uncovering psychotherapy, and it simply points to an expansion of ego, an expansion of I-ness, into a higher and wider identity that integrates previously alienated processes.
The term ego obviously can be used in a large number of quite different ways, from the very broad to the very narrow, and it is altogether necessary to specify which usage one intends, or else interminable arguments arise that are generated only by an arbitrary semantic choice.
In the broadest sense, ego means “self” or “subject,” and thus when Piaget speaks of the earliest stages being “egocentric,” he does not mean that there is a clearly differentiated self or ego set apart from the world. He means just the opposite: the self is not differentiated from the world, there is no strong ego, and thus the world is treated as an extension of the self, “egocentrically.” Only with the emergence of a strong and differentiated ego (which occurs from the third to the fifth fulcrums, culminating in formop, or rational perspectivism)—only with the emergence of the mature ego does egocentrism die down! The “pre-egoic” stages are the most egocentric!
Thus, it is only at the level of formal operational thought that a truly strong and differentiated self or ego emerges from its embeddedness in bodily impulses and pre-given social roles; and that, indeed, is what Habermas refers to as ego identity, a fully separated-individuated sense of self.
To repeat: the “ego,” as used by psychoanalysis, Piaget, and Habermas (and others), is thus less egocentric than its pre-egoic predecessors!
I will most often use the term ego in that specific sense, similar to Freud, Piaget, Habermas, and others—a rational, individuated sense of self, differentiated from the external world, from its social roles (and the superego), and from its internal nature (id).
In this usage, there are pre-egoic realms where the self is poorly differentiated from the internal and external world (there are only “ego nuclei,” as psychoanalysis puts it). These pre-egoic realms are, to repeat, the most egocentric (since the infant or child doesn’t have a strong ego, it thinks that the world feels what it feels, wants what it wants, caters to its every desire: it does not clearly separate self and other, and thus treats the other as an extension of the self).
The ego begins to emerge more stably in the mythic stage (as a persona or role) and finally emerges, in the formal operational stage, as a self clearly differentiated from the external world and from its various roles (personae), which is the culmination of the overall egoic realms. Higher developments into more spiritual realms are then referred to as being transegoic, with the clear understanding that the ego is being negated but also preserved (as a functional self in conventional reality). The self in these higher stages I will refer to as the Self (and not the pure Ego, unless otherwise indicated, because this confuses everybody), and I will explain all of that in more detail as we proceed.
These three large realms are also referred to, in very general terms, as the subconscious (pre-egoic), the self-conscious (egoic), and the superconscious (transegoic); or as the prepersonal, the personal, and the transpersonal; or as the prerational, the rational, and the transrational.
The point is that each of those stages is a lessening of egocentrism as one moves closer to the pure Self. The maximum of egocentrism, as Piaget demonstrated, occurs in the primary or physical indissociation (the first fulcrum, where self-identity is physiocentric), because the entire material world is absorbed in the self-sense and cannot even be considered apart from the self-sense. This archaic-autistic stage is not “one with the entire world in bliss and joy,” as many Romantics think, but a swallowing of the material world into the self: the child is all mouth, and everything else is merely food.
As identity switches from the physiocentric to biocentric or ecocentric (fulcrum-2), there is a lessening of “pure autism” (“self-only!”) but a blossoming of emotional narcissism or emotional egocentrism (fulcrum-2), which Mahler summarized as “narcissism at its peak!” (She also summarized it as “the world is the infant’s oyster”: grandiose-omnipotent fantasies). The emergence of preop mind (fulcrum-3) is a lessening of that emotional egocentrism, but a blossoming of egocentric (and geocentric) magic—less primitive than the previous stage, but still shot through with egocentric adherences: the world exists centered on humans.
The emergence of the conop mind (fulcrum-4) is a lessening of that egocentric magic (where the self is central to the cosmos), but it is replaced with ethnocentrism, where one’s particular group, culture, or race is supreme. Nonetheless, at the same time this allows the beginning of what Piaget calls decentering, where one can decenter or stand aside from the egocentrism of the early mind and instead take the role of the other, and this comes to fruition with a further decentering, a further lessening of egocentrism, in formal operational (where one can take the perspective, not just of others in one’s group, but of others in other groups: worldcentric or non-ethnocentric).
As we will see when we follow evolution into the transpersonal domain, these developments converge on an intuition of the Divine as one’s very Self, common in and to all peoples (in fact, all sentient beings), a Self that is the great omega point of this entire series of decreasing egocentrism, of decentering from the small self in order to find the big Self—a Self common in and to all beings and thus escaping the egocentrism (and ethnocentrism) of each. The completely decentered self is the all-embracing Self (as Zen would say, the Self that is no-self).
Formal Operational (12-17+)
At this point, we are tracing the emergence of a strong rational ego out of its embeddedness in mythic-membership, and this brings us to Piaget’s formal operational stage.
Formal operational awareness transcends but includes concrete operational thought, and thus formop can operate upon the holons that constitute conop—and that, in fact, is the primary definition of formal operational. Where concrete operational uses rules of thought to transcend and operate on the concrete world, formal operational uses a new interiority to transcend and operate on the rules of thought themselves. It is a new differentiation allowing a new integration (and a deeper and wider identity).
First and foremost, formal operational awareness brings with it a new world of feelings, of dreams, of wild passions and idealistic strivings. It is true that rationality introduces a new and more abstract understanding of mathematics, logic, and philosophy, but those are all quite secondary to the primary and defining mark of reason: reason is a space of possibilities, possibilities not tied to the obvious, the given, the mundane, the profane. Reason, we said earlier, is the great gateway to the unseen, the beginning of the invisible worlds, which is usually the last way people think of rationality.
But think of the great mystics such as Plato and Pythagoras, who saw rational Forms or Ideas as the grand patterns upon which all of manifestation was based, patterns that were utterly invisible to the eye of flesh and could only be seen interiorly, with the eye of the mind. Or think of the great physicists such as Heisenberg and Jeans, who maintained that the ultimate building blocks of the universe are mathematical Forms, also seen only with the mind’s eye. Or of the great Vedantin and Mahayana sages, who maintained that the entire visible world is just a precipitate of the mind’s interior Forms or “seed-syllables.” For all of these theorists, Reason was not an abstraction from the concrete physical world; rather, the concrete world was a reduction or condensation of the great Forms lying beyond the grasp of the senses, Forms which contained en potentia all possible manifest worlds.
Piaget approaches this whole topic by showing that, whereas the concrete operational child can indeed operate upon the concrete world, the child at that stage ultimately remains tied to the obvious and the given and the phenomenal, whereas the formal operational adolescent will mentally see various and different possible arrangements of the given.
In a typical Piagetian experiment, a child is presented with five glasses which contain colorless liquids. Three of the glasses contain liquids that, if mixed together, will produce a yellow color. The child is asked to produce a yellow color.
The preop child will randomly combine a few glasses, then give up. If she accidentally hits upon the right solution, she will give a magical explanation (“The sun made it happen”; “It came from the clouds”).
The conop child will eagerly begin by combining the various glasses, three at a time. She does this concretely; she will continue the concrete mixing until she hits upon the right solution or eventually gets tired and quits.
The formop will begin by telling you that you have to try all the possible combinations of three glasses. She has a mental plan or formal operation that lets her see, however vaguely, that all possible combinations have to be tried. She doesn’t have to stumble through the actual concrete operations to understand this. Rather, she sees, with the mind’s eye, that all possibilities must be taken into account.
In other words, this is a very relational type of awareness: all the possible relations that things can have with each other need to be held in awareness—and that is radically new. This is not the “wholeness” of syncretic fusion, where the integrity of wholes and parts is violated in a magical fusion, but rather a relationship of mutual interaction and mutual interpenetration, where wholes and parts, while remaining perfectly discrete and intact, are also seen to be what they are by virtue of their relationships to each other. The preop child, and to a lesser extent the conop child, thinks the color yellow is a simple property of the liquids; the formop adolescent understands that the color is a relationship of various liquids.
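The combinatorial space that the formop adolescent holds in mind can be sketched in a few lines of code. This is only an illustration of the enumeration idea; the glass labels and the particular “winning” triple are invented for the example.

```python
from itertools import combinations

glasses = ["A", "B", "C", "D", "E"]   # five glasses of colorless liquid
yellow_triple = {"A", "C", "E"}       # hypothetical combination that turns yellow

# The formop insight: systematically enumerate every possible 3-glass
# combination, rather than mixing concretely at random until one works.
all_triples = list(combinations(glasses, 3))
print(len(all_triples))               # C(5,3) = 10 possible mixtures

# Checking each candidate against the (hypothetical) winning set:
solution = next(t for t in all_triples if set(t) == yellow_triple)
print(solution)
```

The point is not the code itself but that the exhaustive plan (“try all combinations of three”) exists as a mental structure before any concrete mixing is done.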
Formal operational awareness, then, is the first truly ecological mode of awareness, in the sense of grasping mutual interrelationships. It is not embedded in ecology (it transcends ecology without denying it), and thus can reflect on the web of relationships constituting it. As various researchers have pointed out, to use Cowan’s particular phrasing: “Again the emphasis in formal schemes is on the coordination of [various] systems. Not only can adolescents [at this stage] observe and reason about changes in the interior of [an individual], they can also be concerned with the reciprocal changes in the surrounding environment. Only then, for example, will they be able to conceptualize an ecological system in which changes in one aspect may lead to a whole system of changes in the balance between other aspects of nature.”
So the first formulation is: formal operational = ecological.
The fact that formop is also strong enough to potentially repress the biosphere, resulting in ecological catastrophe, indicates merely that ecological catastrophe is an unfortunate possibility, but not an inherent component, of rationality. We want to tease apart the pathological manifestations of any stage from its authentic achievements, and celebrate the latter even as we try to redress the former. The fact that ecological awareness becomes even greater at the next stage, the centauric, should not detract from the fact that it begins here, with the formal operational understanding of mutual relationships.
The second equation we need is: formal operational = understanding of relativity.
The capacity to take different perspectives, we saw, begins in earnest with conop. But with the emergence of formop, all the various perspectives can be held in mind, however loosely, and thus all of them become relative to each other. “In a set of experiments, a snail moves along a board, which itself is moving along a table. Only children at the formal operational stage can understand the distance which the snail travels relative to the board and to the table. Here we find the intellectual equipment necessary for conceptions of relativity—that time taken or space traveled cannot be absolute, but must be measured relative to some arbitrary point.”
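The snail experiment reduces to one line of frame-of-reference arithmetic, which can be made explicit as follows. All speeds and the time interval here are invented purely for illustration.

```python
# Hypothetical values (cm per second); chosen only for illustration.
snail_vs_board = 1.0    # snail's speed measured relative to the board
board_vs_table = 3.0    # board's speed measured relative to the table
t = 10.0                # elapsed time in seconds

# The distance "the snail travels" depends on the reference frame chosen:
distance_on_board = snail_vs_board * t                      # 1.0 * 10 = 10 cm
distance_on_table = (snail_vs_board + board_vs_table) * t   # (1+3) * 10 = 40 cm

print(distance_on_board, distance_on_table)
```

The formop child grasps that neither answer is “the” distance: each is correct relative to its frame, which is exactly the relativity insight the passage describes.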
The third equation we need is: formal operational = non-anthropocentric.
The egocentric, geocentric, anthropocentric notions of reality, so prevalent in the earlier magic and mythic stages, and so defining of those stages, finally begin to wind down and lose their grip on awareness…
And not merely egocentrism but sociocentrism or ethnocentrism begins to wind down. With the coming of formop, the rules and the norms of any given society can themselves be reflected upon and judged by more universal principles, principles that apply not just to this or that culture, or this or that tribe, but to the multiculturalism of universal perspectivism. Not “My country right or wrong,” but “Is my country actually right?” Not concrete moral rules such as the Ten Commandments (“Thou shalt have no other gods before me”—intertribal squabbling), but more universal statements, principles of justice and mercy and compassion, of reciprocity and equality, based on mutual respect for individuals and the dictates of conscience based on rights and responsibilities…
Thus Kohlberg, Gilligan, and Habermas all refer to this general stage as postconventional…Socrates vs. Athens. Martin Luther King, Jr., vs. segregation. Gandhi vs. cultural imperialism.
Thus, we have seen moral development move from a preconventional orientation, which is strongly egocentric, geocentric, biocentric, narcissistic, bound to the body’s feelings and nature’s
impulses (the first three fulcrums), to a conventional or sociocentric or ethnocentric orientation bound to one’s society, culture, tribe, or race, to a postconventional or worldcentric orientation, operating in the space of universal pluralism and global grasp…
For all these reasons, the individual at this stage, who can no longer rely on society’s given roles in order to establish an identity, is thus thrown back on his or her own resources. “Who am I?” becomes, for the first time, a burning question, and the self-esteem needs emerge from the belongingness needs (Maslow), or a “conscientious” self emerges from a “conformist” mode (Loevinger).
A failure to negotiate this painfully self-conscious phase (fulcrum 5)—a differentiation from ethnocentrism and sociocentrism—results in the characteristic pathology of this stage, which Erikson called an “identity crisis.” This is not a problem of merely finding an appropriate role in society (that would be script pathology); it is one of finding a self that may or may not fit with society at all (Thoreau on civil disobedience comes to mind).
In addition to formal operational awareness being ecological, relational, and nonanthropocentric, we have already mentioned several of its other properties: it is the first structure that is highly reflexive and highly introspective; it is experimental and relies on evidence to settle issues; it is universal as pluralism and perspectivism; and it is propositional (can understand “what if” and “as if” statements; the fact that formop is the first structure to grasp “as if” statements turns out to be extremely important when it comes to interpreting mythology, as we will see in the following section on Joseph Campbell). But all of these are just variations on the central theme: reason is a space of possibilities.
No wonder adolescence and the emergence of formop is a time of wild passions and explosive idealisms, of fantastic dreaming and heroic urges, of utopian yells and revolutionary upsurge, of desires to change the entire world and idealistically straighten it all out, of feelings and emotions unleashed from the merely given and offered instead the space of all possibilities, a space through which they roam and rampage with love and passion and wildest terror. And all of this, all of it, comes from being able to see the possibilities of what might be, possibilities seen only with the mind’s eye, possibilities that point toward worlds not yet in existence and worlds not yet really seen, the great, great doorway to the invisible and the beyond, as Plato, and Pythagoras, and Shankara, and every mystic worth his or her salt has always known.
The higher developments do indeed lie beyond reason, but never beneath it.
Joseph Campbell
There is no greater friend of mythology than Joseph Campbell, and I mean that in a good sense. In a series of articulate and extremely well-researched books, Campbell has done more than any other person, with the possible exception of Mircea Eliade and Carl Jung, to champion the position that mythological thought is the primary carrier of spiritual and mystical awareness. I and countless researchers have drawn on his works time and again, and his meticulous scholarship and detailed analysis never fail to inspire…
And yet his position, I believe, is finally untenable, and can be demonstrated to be so using his own assumptions and his own conclusions. For his position is, in the last analysis, a form of elevationism, and it is necessary to face this directly if we are ever to make sense of the truly deeper or higher developments of genuine spiritual and mystical experience. For one of the best ways to know what authentic mystical experience is, is to know what it is not.
To begin with, Campbell openly accepts the essentials of the Piagetian system. That is, he accepts the fact that the basic motifs of mythological thought are produced by the infantile and childhood structures of preop and early conop, and he explicitly says so using Piagetian terms. As just one of hundreds of instances:
The two orders—the infantile and the religious—are at least analogous, and it may well be that the latter is simply a translation of the former to a sphere out of range of critical observation [reason]. Piaget has pointed out that although the little myths of genesis invented by children to explain the origins of themselves and of things may differ, the basic assumption underlying all is the same: namely, that things have been made by someone, and that they are alive and responsive to the commands of their creators. The origin myths of the world’s mythological systems differ too; but in all the conviction is held [as in childhood], without proof, that the living universe is the handiwork…of some mother-father God [artificialism/anthropocentrism].
These three principles [magical participation, animism, and anthropocentrism] may be said to constitute the axiomatic, spontaneously supposed frame of reference of all childhood experience, no matter what the local details of this experience happen to be. And these three principles, it is no less apparent, are precisely those most generally represented in the mythologies and religious systems of the world.
Campbell cheerfully, and even enthusiastically, acknowledges all of this, and he does so because he has a plan. He has a plan, that is, to salvage mythology, to prove that mythology is “really” religious and genuinely spiritual, and is not, in fact, merely a device of childhood.
The plan is this: The mythological productions of preop and conop, he says, are always taken literally and concretely, a point I have also been at pains to emphasize. But, Campbell says, in a very few individuals, the myths are not taken literally, but are rather taken in an “as if” fashion (his terms), in a playful fashion that releases one from the concrete myth and ushers one into more transcendental realms.
And this, he says, is the real function of myth, and therefore this is how all myths have to be judged. For the masses it remains true that myth is an illusion, a distortion, an infantile and childish approach to reality (all his phrases), but for the very few who can see through them, myths become the gateway to the genuinely mystical. And he belabors the point that myths alone, not reason, can do this, and that this is their wonderful function. And here he starts running into grave difficulties.
When myths are taken concretely and literally, Campbell says, they serve the mundane function of integrating individuals into the society and worldview of a given culture, and in that ordinary function, he says, they serve no spiritually transcendental or mystical purpose at all, which is true enough. I myself see that mundane integration as the central, enduring, and extremely important function of myths at that stage of development—simple cultural meaning and correlative social integration (at a preop and conop level).
Campbell acknowledges that function, but since he is looking for a way to elevate myths to a transpersonal status, those functions become quite secondary to him. In fact, he says, when people take myths literally—which, he says, 99.9 percent of mythic believers do—then those myths become distorted. He is emphatic about this: “It must be conceded, as a basic principle of our natural history of the gods and heroes, that whenever a myth has been taken literally its sense has been perverted.”
Let us ignore, for the moment, that this implies that 99.9% of mythic believers are perverted (instead of stage-specifically quite adequate), and look instead to those very few individuals who do not take myth literally but rather in an “as if” fashion. By “as if” Campbell explicitly means the use given to it by Kant in his Prolegomena to Every Future System of Metaphysics, where Kant says that we can hold our knowledge of the world in an “as if” or “possible realities” fashion. Campbell then drives to the heart of his argument:
I am willing to accept the word of Kant, as representing the view of a considerable metaphysician. And applying it to the range of festival games and attitudes just reviewed [by which he means the attitude that does not take the myth seriously or literally]—from the mask of the consecrated host and temple image, transubstantiated worshiper and transubstantiated world—I can see, or believe I can see, that the principle of release operates throughout the series by way of an “as if”; and that, through this, the impact of all so-called “reality” upon the psyche is transubstantiated.
In other words, a myth is being a “real myth” when it is not being taken as true, when it is being held in an “as if” fashion. And Campbell knows perfectly well that an “as if” stance is possible only with formal operational awareness. Thus, according to his own conclusions, a myth offers its “release” only when it is transcended by, and held firmly in, the space of possibilities and as-ifs offered by rationality. It is reason, and reason alone, that can release myth from its concrete literalness and hold it in a playful, as-if, what-if fashion, using it as an analogy of what higher states might be like, which is something that myth, by itself, could never do (as Campbell bizarrely concedes).
It is people such as Campbell and Jung and Eliade, operating from a widespread access to rationality—something the originators of myth did not have—who then read deeply symbolic “as ifs” into them, and who like to play with myths and use them as analogies and have great good fun with them, whereas the actual myth-believers do not play with the myths at all, but take them deadly seriously and refuse to open them to reasonable discourse or any sort of “as if” at all.
In short, a myth serves Campbell’s main function only when it ceases to be a myth and is released into the space of reason, into the space of alternatives and possibilities and as-ifs. What structure does he think Kant is operating from?
Thus, in all of Campbell’s presentations, he takes two tacks: he first lays out the concrete and literal way that 99.9 percent of believers take the myth. And here he is not often kind. He clearly despises concrete mythic-beliefs (“On the popular side, in their popular cults, the Indians [i.e., from India] are, of course, as positivistic in their readings of their myths as any farmer in Tennessee, rabbi in the Bronx, or pope in Rome. Krishna actually danced in manifold rapture with the gopis, and the Buddha walked on water”).
Instead of seeing the concrete myth as the only way that myth can be believed at that stage of development, and thus as being perfectly adequate and noble (if partial and limited) for that stage, he takes the concrete belief in magic and myth as a “perversion,” as if this structure actually had a choice for which it could be condemned. He is in fact denigrating an entire series of developmental stages that represented extraordinary advances in their own ways, and were no more a perversion of spiritual development than an acorn is a perversion of an oak. But he must condemn these stages per se because he judges them against his elevated version of “real mythology,” whereas I am not condemning these stages per se because they were the real McCoy, the genuine item: they were doing exactly what is appropriate and definitive and stage-specific for mythology.
Second, Campbell then suggests the ways that those very few (who do not take the myth literally) have used to transcend the myth (and are therefore, I would like to point out, no longer doing anything that could remotely be called mythology). This involves first and foremost, for Campbell, holding the myth in the space of “as if”; that is, holding it in the space of reason (with the possibility of then going further and transcending reason as well).
And here Campbell commits the classic pre/trans fallacy. Since the prerational realms are definitely mythological, then Campbell wants to call the transrational realms “mythological” as well, since they too are nonrational (and since he wants to salvage mythology with a field promotion). So on one side he lumps together all nonrational endeavors (from primitive mythology to highly developed contemplative encounters), and on the other side—the “bad” side—he dumps poor reason, even as he himself is in fact (and rather hypocritically) using the space of reason to salvage his myths. “Mythological symbols touch and exhilarate centers of life beyond the reach of the vocabularies of reason.” There is indeed a “beyond reason,” but how much more so is it “beyond mythology.”
And, in fact, it is not “in praise of mythology,” but rather “beyond mythology,” to which the entire corpus of Campbell’s work inexorably points. In surveying his truly magnificent, four-volume masterpiece, The Masks of God, Campbell leaves us with one final message: As any ethnologist, archaeologist, or historian would observe, the myths of the differing civilizations have sensibly varied throughout the centuries and broad reaches of mankind’s residence in the world, indeed to such a degree that the “virtue” of one mythology has often been the “vice” of another, and the heaven of the one the other’s hell. Moreover, with the old horizons now gone that formerly separated and protected the various culture worlds and their pantheons, a veritable Gotterdammerung has flung its flames across the universe. Communities that were once comfortable in the consciousness of their own mythologically guaranteed godliness find, abruptly, that they are devils in the eyes of their neighbors.
The ethnocentric and divisive nature of mythology is fully conceded. Lamenting this state of affairs (even though it is inherent in mythology as mythic-imperialism), Campbell concludes that some more global understanding “of a broader, deeper kind than anything envisioned anywhere in the past is now required.” The hope, indeed, lies beyond parochial and provincial mythology. And beyond mythology is global and universal reason (and then beyond reason…).
Nowhere is Campbell’s pre/trans confusion more painfully obvious than in his attempt to displace or deconstruct rational science (and thus simultaneously elevate mythology). And again, the embarrassment is established by his own premises and his own logical conclusions, which a mind as fine as Campbell’s can ignore only by prejudice.
Since Campbell’s aim is to prove that reason and science are in no sense “higher” than “real” mythology, he begins first by pointing out that even the worldview of science is actually a mythology. If he can do this successfully, he will have put science and mythology on the same level. He proceeds to outline four factors (or four functions) that all mythologies have in common. The first he calls “metaphysical,” whose function is “to reconcile waking consciousness to…the universe as it is.” The second function is to provide “an interpretative total image of the same,” an interpretive cosmology. The third is sociocultural, “the validation and maintenance of a social order.” And the fourth is psychological, or individual orientation and integration…
And, of course, defined that way, science (or the scientific worldview) does indeed perform all four functions of mythology. But then, adds Campbell, science of course does some other things that mythology per se does not, such as its spectacular discoveries in evolution, medicine, engineering, and so forth.
In other words, rationality/science does everything myth does, plus something extra.
That, of course, is the definition of a higher stage. Campbell recognizes that mythology originates from a particular stage of human development (which he happily concedes is the childhood of men and women), and then he also defines it as what all stages have in common…By this sneaky dual definition he hopes to be able both to concede mythology’s childishness and run it through all higher stages, thus allowing him not only to salvage mythology (since it is now what all stages have in common) but also to push it all the way to infinity, all the way to transpersonal Spirit.
But the four functions…are not a definition of mythology’s functions; they are a definition of evolution’s functions…Which only leaves Campbell’s other definition: mythology can only rightly lay claim to the childhood of men and women.
And running that definition to infinity simply results in the infantilization of Spirit. Campbell’s dual definitions actually undo each other, and point instead to the inexorable conclusion: beyond mythology is reason, and beyond both is Spirit…
The capacity to go beyond and look at rationality results in going beyond rationality, and the first stage of that going-beyond is vision-logic. If you are aware of being rational, what is the nature of that awareness, since it is now bigger than rationality? To be aware of rationality is no longer to have only rationality, yes?
Numerous psychologists (Bruner, Flavell, Arieti, Cowan, Kramer, Commons, Basseches, Arlin, etc.) have pointed out that there is much evidence for a stage beyond Piaget’s formal operational. It has been called “dialectical,” “integrative,” “creative synthetic,” “integral-aperspectival,” “postformal,” and so forth. I…am using the term vision-logic or network logic. But the conclusions are all essentially the same: “Piaget’s formal operational is considered to be a problem-solving stage. But beyond this stage are the truly creative scientists and thinkers who define important problems and ask important questions. While Piaget’s formal model is adequate to describe the cognitive structures of adolescents and competent adults, it is not adequate to describe the towering intellect of Nobel laureates, great statesmen and stateswomen, poets, and so on.”
True enough. But I would like to give a different emphasis to this structure, for while very few people might actually gain the “towering status of a Nobel laureate,” the space of vision-logic (its worldspace or worldview) is available for any who wish to continue their growth and development. In other words, to progress through the various stages of growth does not mean
that one has to extraordinarily master each and every stage, and demonstrate a genius comprehension at that stage before one can progress beyond it. This would be like saying that no individuals can move beyond the oral stage until they become gourmet cooks.
It is not necessary to be able to articulate the characteristics of a particular stage (children progress beyond preop without ever being able to define it). It is merely necessary to develop an adequate competence at that stage, in order for it to serve just fine as a platform for the transcendence to the next stage. In order to transcend the verbal, it is not necessary to first become Shakespeare.
Likewise, in order to develop formal rationality, it is not necessary to learn calculus and propositional logic. Every time you imagine different outcomes, every time you see a possible future different from today’s, every time you dream the dream of what might be, you are using formal operational awareness. And from that platform you can enter vision-logic, which means not that you have to become a Hegel or a Whitehead in order to advance, but only that you have to think globally, which is not so hard at all. Those who will master this stage, or any stage for that matter, will always be relatively few; but all are invited to pass through.
Because vision-logic transcends but includes formal operational, it completes and brings to fruition many of the trends begun with universal rationality itself (which is why many writers refer to vision-logic as “mature reason” or “dialectical reason” or “synthetic reason,” and so on).
In other words, rationality is global, vision-logic is more global. Take Habermas, for example…Formal operational rationality establishes the postconventional stages of, first, “civil liberties” or “legal freedom” for “all those bound by law,” and then, in a more developed stage, it demands not just legal freedom but also “moral freedom” for “all humans as private persons.” But even further, mature and communicative reason (our vision-logic) demands both “moral and political freedom” for “all human beings as members of a world society.” Thus, where rationality began the worldcentric orientation of universal pluralism, vision-logic brings it to a mature fruition by demanding not just legal and moral freedom, but legal and moral and political freedom…
In just the same way, ecological and relational awareness, which started to emerge in formal operational, come to a major fruition with vision-logic and centauric worldview. For, in beginning to differentiate from rationality (look at it, operate upon it), vision-logic can, for the first time, integrate reason with its predecessors, including life and matter…
In other words…centauric vision-logic can integrate physiosphere, biosphere, and noosphere in its own compound individuality (and this is…the next major leading-edge global transformation, even though most of the “work yet to be done” is still getting the globe up to decentered universal pluralism in the first place).
This overall integration (physiosphere, biosphere, and noosphere, or matter, body, mind) is borne out, for example, by the researches of Broughton, Loevinger, Selman, Maslow, and others. As only one example…we can take the work of John Broughton.
As usual, this new centauric stage possesses not just a new cognitive capacity (vision-logic)—it also involves a new sense of identity (centauric), with new desires, new drives, new needs, new perceptions, new terrors, and new pathologies: it is a new and higher self in a new and wider world of others. And Broughton has very carefully mapped out the developmental stages of self and knowing that lead up to this new centauric mode of being-in-the-world.
To simplify considerably, Broughton asked individuals from preschool age to early adulthood: what or where is your self?
Since this was a verbal study, Broughton began with the late preop child (magic-mythic), which he calls level zero. At this stage, children uniformly reply that the self is “inside” and reality is “outside.” Thoughts are not distinguished from their objects (still magical adherences).
At level one, still in the late preop stage, children believe that the self is identified with the physical body, but the mind controls the self and can tell it what to do, so it is the mind that moves the body. The relation of mind to body is one of authority: the mind is the big person and the body is a little person (i.e., mind and body are slowly differentiating). Likewise, thoughts are distinguished from objects, but there is no distinction between reality and appearance (“naïve realism”).
Level two occurs at about ages seven to twelve years (conop). Mind and body are initially differentiated at this level (completion of fulcrum three) and the child speaks of the self as being, not a body, but a person (a social role or persona, fulcrum four), and the person includes both mind and body. Although thoughts and things are distinguished, there is still a strong personalistic flavor to knowledge (remnants of egocentrism), so facts and personal opinions are not easily differentiated.
At level three, occurring around eleven to seventeen years (early formop), “the social personality or role is seen as false outer appearance, different from the true inner self.” Here we see clearly the differentiation of the self (the rational ego) from its embeddedness in sociocentric roles—the emergence of a new identity or relative autonomy which is aware of, and thus transcends or disidentifies from, overt social roles. “The self is what the person’s nature normally is: it is a kind of essence that remains itself over changes in mental contents.”
At level four, or late formop, the person becomes capable of hypothetico-deductive awareness (what if, as if), and reality is conceived in terms of relativity and interrelationships…The self is viewed as a postulate “lending unity and integrity to personality, experience, and behavior” (this is the “mature ego”).
But, and this is very telling, development can take a cynical turn at this stage. Instead of being the principle lending unity and integrity to experience and behavior, the self is simply identified with experience and behavior. In the cynical behavioristic turn of this stage, “the person is a cybernetic system guided to fulfillment of its material wants [quick note: “the essential goal of cybernetics is to understand and define the functions and processes of systems that have goals and that participate in circular, causal chains that move from action to sensing to comparison with desired goal, and again to action” (Wikipedia)]. At this level, radical emphasis on seeing everything within a relativistic or subjective frame of reference leaves a person close to a solipsistic position.”
The world is seen as a great relativistic cybernetic system so “holistic” that it leaves no room for the actual subject in the objective framework. The self therefore hovers above reality, disengaged, disenchanted, disembodied. It is “close to a solipsistic position”: hyperagency cut off from all communions. And this, as we have seen, is essentially the fundamental Enlightenment paradigm: a perfectly holistic world that leaves a perfectly atomistic self.
A transcendental self can bond with other transcendental selves, whereas a merely enlightened empirical self disappears into the empirical web and interlocking order, never to be heard from again. (No strand in the web is ever, or can ever be, aware of the whole web; if it could, then it would cease to be merely a strand. This is not allowed by systems theory, which is why, as Habermas demonstrated, systems theory always ends up isolationist and egocentric, or “solipsistic.”)
But for a transcendental self to emerge, it has first to differentiate from the merely empirical self, and thus we find, with Broughton: “At level five the self as observer is distinguished from the self-concept as known.” In other words, something resembling a pure observing Self (a transcendental Witness or Atman, which we will investigate in a moment) is beginning to be clearly distinguished from the empirical ego or objective self—it is a new interiority, a new going within that goes beyond, a new emergence that transcends but includes the empirical ego. This beginning transcendence of the ego we are, of course, calling the centaur (the beginning of fulcrum six, or the sixth major differentiation that we have seen so far in the development of consciousness). This is the realm of vision-logic leading to centauric integration, which is why at this stage, Broughton found that “reality is defined by the coherence of the interpretive framework.”
This integrative stage comes to fruition at Broughton’s last major level (late centauric), where “mind and body are both experiences of an integrated self,” which is the phrase I have most often used to define the centauric or bodymind-integrated self. Precisely because awareness has differentiated from (or disidentified from, or transcended) an exclusive identification with body, persona, ego, and mind, it can now integrate them in a unified fashion, in a new and higher holon with each of them as junior partners. Physiosphere, biosphere, noosphere—exclusively identified with none of them, therefore capable of integrating all of them.
But everything is not sweetness and light with the centaur. As always, new and higher capacities bring with them the potential for new and higher pathologies. As vision-logic adds up all the possibilities given to the mind’s eye, it eventually reaches a dismal conclusion: personal life is a brief spark in the cosmic void. No matter how wonderful it all might be now, we are still going to die: dread, as Heidegger said, is the authentic response of the existential (centauric) being, a dread that calls us back from self-forgetting to self-presence, a dread that seizes not this or that part of me (body or persona or ego or mind), but rather the totality of my being-in-the-world. When I authentically see my life, I see its ending, I see its death; and I see that my “other selves,” my ego, my personas, were all sustained by inauthenticity, by an avoidance of the awareness of lonely death.
A profound existential malaise can set in—the characteristic pathology of this stage (fulcrum six). No longer protected by anthropocentric gods and goddesses, reason gone flat in its happy capacity to explain away the Mystery, not yet delivered into the hands of the superconscious—we stare blankly into that dark and gloomy night, which will very shortly swallow us up as surely as it once spat us forth. Tolstoy:
The question, which in my fiftieth year had brought me to the notion of suicide, was the simplest of all questions, lying in the soul of every man: “What will come from what I am doing now, and may do tomorrow? What will come from my whole life?” Otherwise expressed: “Why should I live? Why should I wish for anything?” Again, in other words: “Is there any meaning in my life which will not be destroyed by the inevitable death awaiting me?”
That question would never arise to the magical structure; that structure has abundant, even exorbitant meaning because the universe centers always on it, was made for it, caters to it daily: every raindrop soothes its soul because every confirming drop reassures it of its cosmocentricity: the great spirit wraps it in the wind and whispers to it always, I exist for you.
That question would never arise to the mythic-believer: the soul exists only for its God, a God that, by a happy coincidence, will save this soul eternally if it professes belief in this God: a mutual admiration society destined for a bad infinity. A crisis of faith and meaning is impossible from within this circle (a crisis occurs only when this soul suspects this God).
That question would never beset the happy rationalist, who long ago became a happy rationalist by deciding never to ask such questions again, and then forgetting, rendering unconscious, this question, and sustaining the unconscious by ridiculing those who ask it.
No, that question arises from a self that knows too much, sees too much, feels too much. The consolations are gone; the skull will grin in at the banquet; it can no longer tranquilize itself with the trivial. From the depths, it cries out to gods no longer there, searches for a meaning not yet disclosed, still to be incarnated. Its very agony is worth a million happy magics and a thousand believing myths, and yet its only consolation is its unrelenting pain—a pain, a dread, an emptiness that feels beyond the comforts and distractions of the body, the persona, the ego, looks bravely into the face of the Void, and can no longer explain away either the Mystery or the Terror. It is a soul that is much too awake. It is a soul on the brink of the transpersonal.
The Transpersonal Domains
We have repeatedly seen that the problems of one stage are only “defused” at the next stage, and thus the only cure for existential angst is the transcendence of the existential condition, that is, the transcendence of the centaur, negating and preserving it in a yet higher and wider awareness. For we are here beginning to pass out of the noosphere and into the theosphere, into the transpersonal domains, the domains not just of the self-conscious but the superconscious.
A great number of issues need to be clarified as we follow evolution…into the higher or deeper forms of transpersonal unfolding.
First and foremost, if this higher unfolding is to be called “religious” or “spiritual,” it is a very far cry from what is ordinarily meant by those terms. We have…painstakingly reviewed the earlier developments of the archaic, magic, and mythic structures (which are usually associated with the world’s great religions), precisely because those structures are what transpersonal and contemplative development is not. And here we can definitely agree with Campbell: if 99.9 percent of people want to call magic and mythic “real religion,” then so be it for them (that is a legitimate use); but that is not what the world’s greatest yogis, saints, and sages mean by mystical or “really religious” development, and in any event is not what I have in mind.
Campbell, however, is quite right that a very, very few individuals, during the magic and mythic and rational eras, were indeed able to go beyond magic, beyond mythic, and beyond rational—into the transrational and transpersonal domains. And even if their teachings (such as those of Buddha, Christ, Patanjali, Padmasambhava, Rumi, and Chih-I) were snapped up by the masses and translated downward into magic and mythic and egoic terms—“the salvation of the individual soul”—that is not what their teachings clearly and even blatantly stated, nor did they intentionally lend support to such endeavors. Their teachings were about the release from individuality, and not about its everlasting perpetuation, a grotesque notion that was equated flat-out with hell or samsara.
Their teachings, and their contemplative endeavors, were (and are) transrational through and through. That is, all of the contemplative traditions aim at going within and beyond reason, and they all start with reason, start with the notion that truth is to be established by evidence, that truth is the result of experimental methods, that truth is to be tested in the laboratory of personal experience, that these truths are open to all those who wish to try the experiment and thus disclose for themselves the truth or falsity of the spiritual claims—and that dogmas or given beliefs are precisely what hinder the emergence of deeper truths and wider visions.
Thus, each of these spiritual or transpersonal endeavors…claims that there exist higher domains of awareness, embrace, love, identity, reality, self, and truth. But these claims are not dogmatic; they are not believed in merely because an authority proclaimed them, or because sociocentric tradition hands them down, or because salvation depends upon being a “true believer.” Rather, the claims about these higher domains are a conclusion based on hundreds of years of experimental introspection and communal verification. False claims are rejected on the basis of consensual evidence, and further evidence is used to adjust and fine-tune the experimental conclusions.
These spiritual endeavors, in other words, are scientific in any meaningful sense of the word, and the systematic presentations of these endeavors follow precisely those of any reconstructive science.
Objections To The Transpersonal
The common objections to these contemplative sciences are not very compelling. The most typical objection is that these mystical states are private and interior and cannot be publicly validated; they are “merely subjective.”
This simply is not true; or rather, if it is true, then it applies to any and all nonempirical endeavors, from mathematics to literature to linguistics to psychoanalysis to historical interpretation. Nobody has ever seen, “out there”
in the “sensory world,” the square root of a negative one. That is a mathematical symbol seen only inwardly, “privately,” with the mind’s eye. Yet a community of trained mathematicians know exactly what that symbol means, and they can share that symbol easily in intersubjective awareness, and they can confirm or reject the proper and consistent uses of that symbol. Just so, the “private” experiences of contemplative scientists can be shared with a community of trained contemplatives, grounded in a common and shared experience, and open to confirmation or rebuttal based on public evidence…
There is, of course, one proviso: the experimenter must, in his or her own case, have developed the requisite cognitive tools. If, for example, we want to investigate concrete operational thought, a community of those who have only developed to the preoperational level will not do. If you take a preop child, and in front of the child pour the water from a short fat glass into a tall thin glass, the child will tell you that the tall glass has more water. If you say, no, there is the same amount of water in both glasses, because you saw me pour the same water from one glass to the other, the child will have no idea what you’re talking about. “No, the tall glass has more water.” No matter how many times you pour the water back and forth between the two glasses, the child will deny they have the same amount of water…The preop child is immersed in a world that includes conop realities, is drenched in those realities, and yet cannot “see” them: they are all “otherworldly.”

The Mental Life of Plants and Worms, Among Others
Oliver Sacks

April 24, 2014 Issue
The Formation of Vegetable Mould through the Action of Worms: with Observations on Their Habits
by Charles Darwin
London: John Murray (1881)

Charles Darwin’s last book, published in 1881, was a study of the humble earthworm. His main theme—expressed in the title, The Formation of Vegetable Mould through the Action of Worms—was the immense power of worms, in vast numbers and over millions of years, to till the soil and change the face of the earth. But his opening chapters are devoted more simply to the “habits” of worms.
Worms can distinguish between light and dark, and they generally stay underground, safe from predators, during daylight hours. They have no ears, but if they are deaf to aerial vibration, they are exceedingly sensitive to vibrations conducted through the earth, as might be generated by the footsteps of approaching animals. All of these sensations, Darwin noted, are transmitted to collections of nerve cells (he called them “the cerebral ganglia”) in the worm’s head.
“When a worm is suddenly illuminated,” Darwin wrote, it “dashes like a rabbit into its burrow.” He noted that he was “at first led to look at the action as a reflex one,” but then observed that this behavior could be modified—for instance, when a worm was otherwise engaged, it showed no withdrawal on sudden exposure to light.
For Darwin, the ability to modulate responses indicated “the presence of a mind of some kind.” He also wrote of the “mental qualities” of worms in relation to their plugging up their burrows, noting that “if worms are able to judge…having drawn an object close to the mouths of their burrows, how best to drag it in, they must acquire some notion of its general shape.” This moved him to argue that worms “deserve to be called intelligent, for they then act in nearly the same manner as a man under similar circumstances.”
As a boy, I played with the earthworms in our garden (and later used them in research projects), but my true love was for the seashore, and especially tidal pools, for we nearly always took our summer holidays at the seaside. This early, lyrical feeling for the beauty of simple sea creatures became more scientific under the influence of a biology teacher at school and our annual visits with him to the Marine Station at Millport in southwest Scotland, where we could investigate the immense range of invertebrate animals on the seashores of Cumbrae. I was so excited by these Millport visits that I thought I would like to become a marine biologist myself.
If Darwin’s book on earthworms was a favorite of mine, so too was George John Romanes’s 1885 book Jelly-Fish, Star-Fish, and Sea-Urchins: Being a Research on Primitive Nervous Systems, with its simple, fascinating experiments and beautiful illustrations. For Romanes, Darwin’s young friend and student, the seashore and its fauna were to be passionate and lifelong interests, and his aim above all was to investigate what he regarded as the behavioral manifestations of “mind” in these creatures.
I was charmed by Romanes’s personal style. (His studies of invertebrate minds and nervous systems were most happily pursued, he wrote, in “a laboratory set up upon the sea-beach…a neat little wooden workshop thrown open to the sea-breezes.”) But it was clear that correlating the neural and the behavioral was at the heart of Romanes’s enterprise. He spoke of his work as “comparative psychology,” and saw it as analogous to comparative anatomy.
Louis Agassiz had shown, as early as 1850, that the jellyfish Bougainvillea had a substantial nervous system, and by 1883 Romanes demonstrated its individual nerve cells (there are about a thousand). By simple experiments—cutting certain nerves, making incisions in the bell, or looking at isolated slices of tissue—he showed that jellyfish employed both autonomous, local mechanisms (dependent on nerve “nets”) and centrally coordinated activities through the circular “brain” that ran along the margins of the bell.
By 1883, Romanes was able to include drawings of individual nerve cells and clusters of nerve cells, or ganglia, in his book Mental Evolution in Animals. “Throughout the animal kingdom,” Romanes wrote,
nerve tissue is invariably present in all species whose zoological position is not below that of the Hydrozoa. The lowest animals in which it has hitherto been detected are the Medusae, or jelly-fishes, and from them upwards its occurrence is, as I have said, invariable. Wherever it does occur its fundamental structure is very much the same, so that whether we meet with nerve-tissue in a jelly-fish, an oyster, an insect, a bird, or a man, we have no difficulty in recognizing its structural units as everywhere more or less similar.
At the same time that Romanes was vivisecting jellyfish and starfish in his seaside laboratory, the young Sigmund Freud, already a passionate Darwinian, was working in the lab of Ernst Brücke, a physiologist in Vienna. His special concern was to compare the nerve cells of vertebrates and invertebrates, in particular those of a very primitive vertebrate (Petromyzon, a lamprey) with those of an invertebrate (a crayfish). While it was widely held at the time that the nerve elements in invertebrate nervous systems were radically different from those of vertebrate ones, Freud was able to show and illustrate, in meticulous, beautiful drawings, that the nerve cells in crayfish were basically similar to those of lampreys—or human beings.
And he grasped, as no one had before, that the nerve cell body and its processes—dendrites and axons—constituted the basic building blocks and the signaling units of the nervous system. Eric Kandel, in his book In Search of Memory: The Emergence of a New Science of Mind (2006), speculates that if Freud had stayed in basic research instead of going into medicine, perhaps he would be known today as “a co-founder of the neuron doctrine, instead of as the father of psychoanalysis.”
Although neurons may differ in shape and size, they are essentially the same from the most primitive animal life to the most advanced. It is their number and organization that differ: we have a hundred billion nerve cells, while a jellyfish has a thousand. But their status as cells capable of rapid and repetitive firing is essentially the same.
The crucial role of synapses—the junctions between neurons where nerve impulses can be modulated, giving organisms flexibility and a whole range of behaviors—was clarified only at the close of the nineteenth century by the great Spanish anatomist Santiago Ramón y Cajal, who looked at the nervous systems of many vertebrates and invertebrates, and by C.S. Sherrington in England (it was Sherrington who coined the word “synapse” and showed that synapses could be excitatory or inhibitory in function).
In the 1880s, however, despite Agassiz’s and Romanes’s work, there was still a general feeling that jellyfish were little more than passively floating masses of tentacles ready to sting and ingest whatever came their way, little more than a sort of floating marine sundew.
But jellyfish are hardly passive. They pulsate rhythmically, contracting every part of their bell simultaneously, and this requires a central pacemaker system that sets off each pulse. Jellyfish can change direction and depth, and many have a “fishing” behavior that involves turning upside down for a minute, spreading their tentacles like a net, and then righting themselves, which they do by virtue of eight gravity-sensing balance organs. (If these are removed, the jellyfish is disoriented and can no longer control its position in the water.) If bitten by a fish, or otherwise threatened, jellyfish have an escape strategy—a series of rapid, powerful pulsations of the bell—that shoots them out of harm’s way; special, oversized (and therefore rapidly responding) neurons are activated at such times.
Of special interest and infamous reputation among divers is the box jellyfish (Cubomedusae)—one of the most primitive animals to have fully developed image-forming eyes, not so different from our own. The biologist Tim Flannery, in an article in these pages, writes of box jellyfish:
They are active hunters of medium-sized fish and crustaceans, and can move at up to twenty-one feet per minute. They are also the only jellyfish with eyes that are quite sophisticated, containing retinas, corneas, and lenses. And they have brains, which are capable of learning, memory, and guiding complex behaviors.
We and all higher animals are bilaterally symmetrical, have a front end (a head) containing a brain, and a preferred direction of movement (forward). The jellyfish nervous system, like the animal itself, is radially symmetrical and may seem less sophisticated than a mammalian brain, but it has every right to be considered a brain, generating, as it does, complex adaptive behaviors and coordinating all the animal’s sensory and motor mechanisms. Whether we can speak of a “mind” here (as Darwin does in regard to earthworms) depends on how one defines “mind.”
We all distinguish between plants and animals. We understand that plants, in general, are immobile, rooted in the ground; they spread their green leaves to the heavens and feed on sunlight and soil. We understand that animals, in contrast, are mobile, moving from place to place, foraging or hunting for food; they have easily recognized behaviors of various sorts. Plants and animals have evolved along two profoundly different paths (fungi have yet another), and they are wholly different in their forms and modes of life.
And yet, Darwin insisted, they were closer than one might think. He wrote a series of botanical books, culminating in The Power of Movement in Plants (1880), just before his book on earthworms. He thought the powers of movement, and especially of detecting and catching prey, in the insectivorous plants so remarkable that, in a letter to the botanist Asa Gray, he referred to Drosera, the sundew, only half-jokingly as not only a wonderful plant but “a most sagacious animal.”
Darwin was reinforced in this notion by the demonstration that insect-eating plants made use of electrical currents to move, just as animals did—that there was “plant electricity” as well as “animal electricity.” But “plant electricity” moves slowly, roughly an inch a second, as one can see by watching the leaflets of the sensitive plant (Mimosa pudica) closing one by one along a leaf that is touched. “Animal electricity,” conducted by nerves, moves roughly a thousand times faster.
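The speed gap is easy to put in concrete terms. Here is a back-of-the-envelope sketch using the essay's rough figures ("roughly an inch a second" for plant electricity, "a thousand times faster" for nerve conduction); these are illustrative values, not precise measurements:

```python
# Illustrative values taken from the text, not precise measurements.
PLANT_SPEED_M_PER_S = 0.0254                        # "roughly an inch a second"
ANIMAL_SPEED_M_PER_S = PLANT_SPEED_M_PER_S * 1000   # "a thousand times faster"

def travel_time(distance_m: float, speed_m_per_s: float) -> float:
    """Seconds for a signal to cover distance_m at the given speed."""
    return distance_m / speed_m_per_s

# Time for a signal to cross a one-meter organism:
plant_t = travel_time(1.0, PLANT_SPEED_M_PER_S)     # ~39 seconds
animal_t = travel_time(1.0, ANIMAL_SPEED_M_PER_S)   # ~0.04 seconds
```

The three-orders-of-magnitude gap is why a Mimosa leaf folds visibly, leaflet by leaflet, while a worm's escape reflex looks instantaneous.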
Signaling between cells depends on electrochemical changes, the flow of electrically charged atoms (ions), in and out of cells via special, highly selective molecular pores or “channels.” These ion flows cause electrical currents, impulses—action potentials—that are transmitted (directly or indirectly) from one cell to another, in both plants and animals.
Plants depend largely on calcium ion channels, which suit their relatively slow lives perfectly. As Daniel Chamovitz argues in his book What a Plant Knows (2012), plants are capable of registering what we would call sights, sounds, tactile signals, and much more. Plants know what to do, and they “remember.” But without neurons, plants do not learn in the same way that animals do; instead they rely on a vast arsenal of different chemicals and what Darwin termed “devices.” The blueprints for these must all be encoded in the plant’s genome, and indeed plant genomes are often larger than our own.
The calcium ion channels that plants rely on do not support rapid or repetitive signaling between cells; once a plant action potential is generated, it cannot be repeated at a fast enough rate to allow, for example, the speed with which a worm “dashes…into its burrow.” Speed requires ions and ion channels that can open and close in a matter of milliseconds, allowing hundreds of action potentials to be generated in a second. The magic ions, here, are sodium and potassium ions, which enabled the development of rapidly reacting muscle cells, nerve cells, and neuromodulation at synapses. These made possible organisms that could learn, profit by experience, judge, act, and finally think.
This new form of life—animal life—emerging perhaps 600 million years ago conferred great advantages, and transformed populations rapidly. In the so-called Cambrian explosion (datable with remarkable precision to 542 million years ago), a dozen or more new phyla, each with very different body plans, arose within the space of a million years or less—a geological eye-blink. The once peaceful pre-Cambrian seas were transformed into a jungle of hunters and hunted, newly mobile. And while some animals (like sponges) lost their nerve cells and regressed to a vegetative life, others, especially predators, evolved increasingly sophisticated sense organs, memories, and minds.
It is fascinating to think of Darwin, Romanes, and other biologists of their time searching for “mind,” “mental processes,” “intelligence,” even “consciousness” in primitive animals like jellyfish, and even in protozoa. A few decades afterward, radical behaviorism would come to dominate the scene, denying reality to what was not objectively demonstrable, denying in particular any inner processes between stimulus and response, deeming these as irrelevant or at least beyond the reach of scientific study.
Such a restriction or reduction indeed facilitated studies of stimulation and response, both with and without “conditioning,” and it was Pavlov’s famous studies of dogs that formalized—as “sensitization” and “habituation”—what Darwin had observed in his worms.
As Konrad Lorenz wrote in The Foundations of Ethology, “an earthworm [that] has just avoided being eaten by a blackbird…is indeed well-advised to respond with a considerably lowered threshold to similar stimuli, because it is almost certain that the bird will still be nearby for the next few seconds.” This lowering of threshold, or sensitization, is an elementary form of learning, even though it is nonassociative and relatively short-lived. Correspondingly, a diminution of response, or habituation, occurs when there is a repeated but insignificant stimulus—something to be ignored.
It was shown within a few years of Darwin’s death that even single-cell organisms like protozoa could exhibit a range of adaptive responses. In particular, Herbert Spencer Jennings showed that the tiny, stalked, trumpet-shaped unicellular organism Stentor employs a repertoire of at least five different responses to being touched, before finally detaching itself to find a new site if these basic responses are ineffective. But if it is touched again, it will skip the intermediate steps and immediately take off for another site. It has become sensitized to noxious stimuli, or, to use more familiar terms, it “remembers” its unpleasant experience and has learned from it (though the memory lasts only a few minutes). If, conversely, Stentor is exposed to a series of very gentle touches, it soon ceases to respond to these at all—it has habituated.
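The behavior Jennings observed can be caricatured as a tiny state machine. The sketch below is a toy illustration of the escalating response repertoire, short-lived sensitization, and habituation described above; the response names and the habituation threshold are hypothetical placeholders, not Jennings's data:

```python
class ToyStentor:
    """Toy model of Jennings's Stentor observations (illustrative only)."""

    # Hypothetical labels for the escalating repertoire of responses.
    ESCALATION = ["bend away", "reverse cilia", "contract", "contract again", "detach"]

    def __init__(self, habituation_threshold: int = 3):
        self.step = 0                # position in the escalating repertoire
        self.sensitized = False      # short-lived "memory" of noxious stimulation
        self.gentle_count = 0
        self.habituation_threshold = habituation_threshold

    def noxious_touch(self) -> str:
        if self.sensitized:
            # Sensitized: skip the intermediate steps and take off at once.
            return self.ESCALATION[-1]
        response = self.ESCALATION[self.step]
        if self.step == len(self.ESCALATION) - 1:
            self.sensitized = True   # it has now "learned" to detach immediately
        else:
            self.step += 1
        return response

    def gentle_touch(self) -> str:
        self.gentle_count += 1
        if self.gentle_count > self.habituation_threshold:
            return "no response"     # habituated to the insignificant stimulus
        return self.ESCALATION[0]
```

Running through six noxious touches walks the model up the repertoire and then, once sensitized, straight to detachment; a run of gentle touches elicits a response at first and then nothing, mirroring habituation.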
Jennings described his work with sensitization and habituation in organisms like Paramecium and Stentor in his 1906 book Behavior of the Lower Organisms. Although he was careful to avoid any subjective, mentalistic language in his description of protozoan behaviors, he did include an astonishing chapter at the end of his book on the relation of observable behavior to “mind.”
He felt that we humans are reluctant to attribute any qualities of mind to protozoa because they are so small:
The writer is thoroughly convinced, after long study of the behaviour of this organism, that if Amoeba were a large animal, so as to come within the everyday experience of human beings, its behaviour would at once call forth the attribution to it of states of pleasure and pain, of hunger, desire, and the like, on precisely the same basis as we attribute these things to the dog.
Jennings’s vision of a highly sensitive, dog-size Amoeba is almost cartoonishly the opposite of Descartes’s notion of dogs as so devoid of feelings that one could vivisect them without compunction, taking their cries as purely “reflex” reactions of a quasi-mechanical kind.
Sensitization and habituation are crucial for the survival of all living organisms. These elementary forms of learning are short-lived—a few minutes at most—in protozoa and plants; longer-lived forms require a nervous system.
While behavioral studies flourished, there was almost no attention paid to the cellular basis of behavior—the exact role of nerve cells and their synapses. Investigations in mammals—involving, for example, the hippocampal or memory systems in rats—presented almost insuperable technical difficulties, due to the tiny size and extreme density of neurons (there were difficulties, moreover, even if one could record electrical activity from a single cell, in keeping it alive and fully functioning for the duration of protracted experiments).
Faced with such difficulties in his anatomical studies in the early twentieth century, Ramón y Cajal—the first and greatest microanatomist of the nervous system—had turned to study simpler systems: those of young or fetal animals, and those of invertebrates (insects, crustaceans, cephalopods, etc.). For similar reasons, Eric Kandel, when he embarked in the 1960s on a study of the cellular basis of memory and learning, sought an animal with a simpler and more accessible nervous system. He settled on the giant sea snail Aplysia, which has 20,000 or so neurons, distributed in ten or so ganglia of about 2,000 neurons apiece. It also has particularly large neurons—some even visible to the naked eye—connected with one another in fixed anatomical circuits.
That Aplysia might be considered too low a form of life for studies of memory did not discountenance Kandel, despite some skepticism from his colleagues—any more than it had discountenanced Darwin when he spoke of the “mental qualities” of earthworms. “I was beginning to think like a biologist,” Kandel writes, recalling his decision to work with Aplysia. “I appreciated that all animals have some form of mental life that reflects the architecture of their nervous system.”
As Darwin had looked at an escape reflex in worms and how it might be facilitated or inhibited in different circumstances, Kandel looked at a protective reflex in Aplysia, the withdrawal of its exposed gill to safety, and the modulation of this response. Recording from (and sometimes stimulating) the nerve cells and synapses in the abdominal ganglion that governed these responses, he was able to show that relatively short-term memory and learning—as involved in habituation and sensitization—depended on functional changes in synapses; but longer-term memory, which might last several months, went with structural changes in the synapses. (In neither case was there any change in the actual circuits.)
As new technologies and concepts emerged in the 1970s, Kandel and his colleagues were able to complement these electrophysiological studies of memory and learning with chemical ones: “We wanted to penetrate the molecular biology of a mental process, to know exactly what molecules are responsible for short-term memory.” This entailed, in particular, studies of the ion channels and neurotransmitters involved in synaptic functions—monumental work that earned Kandel a Nobel Prize.
Where Aplysia has only 20,000 neurons distributed in ganglia throughout its body, an insect may have up to a million nerve cells, all concentrated in one brain, and despite its tiny size may be capable of extraordinary cognitive feats. Thus bees are expert in recognizing different colors, smells, and geometric shapes presented in a laboratory setting, as well as systematic transformations of these. And of course, they show superb expertise in the wild or in our gardens, where they recognize not only the patterns and smells and colors of flowers, but can remember their locations and communicate these to their fellow bees.
It has even been shown, in a highly social species of paper wasp, that individuals can learn and recognize the faces of other wasps. Such face learning has hitherto been described only in mammals; it is fascinating that a cognitive power so specific can be present in insects as well.
We often think of insects as tiny automata—robots with everything built-in and programmed. But it is increasingly evident that insects can remember, learn, think, and communicate in quite rich and unexpected ways. Much of this, doubtless, is built-in—but much, too, seems to depend on individual experience.
Whatever the case with insects, there is an altogether different situation with those geniuses among invertebrates, the cephalopods, consisting of octopuses, cuttlefish, and squid. Here, as a start, the nervous system is much larger—an octopus may have half a billion nerve cells distributed between its brain and its “arms” (a mouse, by comparison, has only 75 to 100 million). There is a remarkable degree of organization in the octopus brain, with dozens of functionally distinct lobes in the brain and similarities to the learning and memory systems of mammals.
Cephalopods are not only easily trained to discriminate test shapes and objects, but some reportedly can learn by observation, a power otherwise confined to certain birds and mammals. They have remarkable powers of camouflage, and can signal complex emotions and intentions by changing their skin colors, patterns, and textures.
Darwin noted in The Voyage of the Beagle how an octopus in a tidal pool seemed to interact with him, by turns watchful, curious, and even playful. Octopuses can be domesticated to some extent, and their keepers often empathize with them, feeling some sense of mental and emotional proximity. Whether one can use the “C” word—consciousness—in regard to cephalopods can be argued all ways. But if one allows that a dog may have consciousness of an individual and significant sort, one has to allow it for an octopus, too.
Nature has employed at least two very different ways of making a brain—indeed, there are almost as many ways as there are phyla in the animal kingdom. Mind, to varying degrees, has arisen or is embodied in all of these, despite the profound biological gulf that separates them from one another, and us from them.4

What Is Enlightenment?
Foucault, Michel. “What is Enlightenment?” In The Foucault Reader, edited by Paul Rabinow, pp. 32-50. New York: Pantheon Books, 1984.

Today when a periodical asks its readers a question, it does so in order to collect opinions on some subject about which everyone has an opinion already; there is not much likelihood of learning anything new. In the eighteenth century, editors preferred to question the public on problems that did not yet have solutions. I don’t know whether or not that practice was more effective; it was unquestionably more entertaining.
In any event, in line with this custom, in November 1784 a German periodical, Berlinische Monatschrift, published a response to the question: Was ist Aufklärung? And the respondent was Kant.
A minor text, perhaps. But it seems to me that it marks the discreet entrance into the history of thought of a question that modern philosophy has not been capable of answering, but that it has never managed to get rid of, either. And one that has been repeated in various forms for two centuries now. From Hegel through Nietzsche or Max Weber to Horkheimer or Habermas, hardly any philosophy has failed to confront this same question, directly or indirectly. What, then, is this event that is called the Aufklärung and that has determined, at least in part, what we are, what we think, and what we do today? Let us imagine that the Berlinische Monatschrift still exists and that it is asking its readers the question: What is modern philosophy? Perhaps we could respond with an echo: modern philosophy is the philosophy that is attempting to answer the question raised so imprudently two centuries ago: Was ist Aufklärung?
Let us linger a few moments over Kant’s text. It merits attention for several reasons.
1. To this same question, Moses Mendelssohn had also replied in the same journal, just two months earlier. But Kant had not seen Mendelssohn’s text when he wrote his. To be sure, the encounter of the German philosophical movement with the new development of Jewish culture does not date from this precise moment. Mendelssohn had been at that crossroads for thirty years or so, in company with Lessing. But up to this point it had been a matter of making a place for Jewish culture within German thought — which Lessing had tried to do in Die Juden — or else of identifying problems common to Jewish thought and to German philosophy; this is what Mendelssohn had done in his Phädon; oder, Über die Unsterblichkeit der Seele. With the two texts published in the Berlinische Monatschrift, the German Aufklärung and the Jewish Haskala recognize that they belong to the same history; they are seeking to identify the common processes from which they stem. And it is perhaps a way of announcing the acceptance of a common destiny — we now know to what drama that was to lead.
2.  But there is more. In itself and within the Christian tradition, Kant’s text poses a new problem. It was certainly not the first time that philosophical thought had sought to reflect on its own present. But, speaking schematically, we may say that this reflection had until then taken three main forms.

– The present may be represented as belonging to a certain era of the world, distinct from the others through some inherent characteristics, or separated from the others by some dramatic event. Thus, in Plato’s Statesman the interlocutors recognize that they belong to one of those revolutions of the world in which the world is turning backwards, with all the negative consequences that may ensue.

– The present may be interrogated in an attempt to decipher in it the heralding signs of a forthcoming event. Here we have the principle of a kind of historical hermeneutics of which Augustine might provide an example.

– The present may also be analyzed as a point of transition toward the dawning of a new world. That is what Vico describes in the last chapter of La Scienza Nuova; what he sees “today” is “a complete humanity … spread abroad through all nations, for a few great monarchs rule over this world of peoples”; it is also “Europe … radiant with such humanity that it abounds in all the good things that make for the happiness of human life.”1

Now the way Kant poses the question of Aufklärung is entirely different: it is neither a world era to which one belongs, nor an event whose signs are perceived, nor the dawning of an accomplishment. Kant defines Aufklärung in an almost entirely negative way, as an Ausgang, an “exit,” a “way out.” In his other texts on history, Kant occasionally raises questions of origin or defines the internal teleology of a historical process. In the text on Aufklärung, he deals with the question of contemporary reality alone. He is not seeking to understand the present on the basis of a totality or of a future achievement. He is looking for a difference: What difference does today introduce with respect to yesterday?

3.  I shall not go into detail here concerning this text, which is not always very clear despite its brevity. I should simply like to point out three or four features that seem to me important if we are to understand how Kant raised the philosophical question of the present day.

Kant indicates right away that the “way out” that characterizes Enlightenment is a process that releases us from the status of “immaturity.” And by “immaturity,” he means a certain state of our will that makes us accept someone else’s authority to lead us in areas where the use of reason is called for. Kant gives three examples: we are in a state of “immaturity” when a book takes the place of our understanding, when a spiritual director takes the place of our conscience, when a doctor decides for us what our diet is to be. (Let us note in passing that the register of these three critiques is easy to recognize, even though the text does not make it explicit.) In any case, Enlightenment is defined by a modification of the preexisting relation linking will, authority, and the use of reason.
We must also note that this way out is presented by Kant in a rather ambiguous manner. He characterizes it as a phenomenon, an ongoing process; but he also presents it as a task and an obligation. From the very first paragraph, he notes that man himself is responsible for his immature status. Thus it has to be supposed that he will be able to escape from it only by a change that he himself will bring about in himself. Significantly, Kant says that this Enlightenment has a Wahlspruch: now a Wahlspruch is a heraldic device, that is, a distinctive feature by which one can be recognized, and it is also a motto, an instruction that one gives oneself and proposes to others. What, then, is this instruction? Aude sapere: “dare to know,” “have the courage, the audacity, to know.” Thus Enlightenment must be considered both as a process in which men participate collectively and as an act of courage to be accomplished personally. Men are at once elements and agents of a single process. They may be actors in the process to the extent that they participate in it; and the process occurs to the extent that men decide to be its voluntary actors.
A third difficulty appears here in Kant’s text in his use of the word “mankind”, Menschheit. The importance of this word in the Kantian conception of history is well known. Are we to understand that the entire human race is caught up in the process of Enlightenment? In that case, we must imagine Enlightenment as a historical change that affects the political and social existence of all people on the face of the earth. Or are we to understand that it involves a change affecting what constitutes the humanity of human beings? But the question then arises of knowing what this change is. Here again, Kant’s answer is not without a certain ambiguity. In any case, beneath its appearance of simplicity, it is rather complex.
Kant defines two essential conditions under which mankind can escape from its immaturity. And these two conditions are at once spiritual and institutional, ethical and political.

The first of these conditions is that the realm of obedience and the realm of the use of reason be clearly distinguished. Briefly characterizing the immature status, Kant invokes the familiar expression: “Don’t think, just follow orders”; such is, according to him, the form in which military discipline, political power, and religious authority are usually exercised. Humanity will reach maturity when it is no longer required to obey, but when men are told: “Obey, and you will be able to reason as much as you like.” We must note that the German word used here is räsonieren; this word, which is also used in the Critiques, does not refer to just any use of reason, but to a use of reason in which reason has no other end but itself: räsonieren is to reason for reasoning’s sake. And Kant gives examples, these too being perfectly trivial in appearance: paying one’s taxes, while being able to argue as much as one likes about the system of taxation, would be characteristic of the mature state; or again, taking responsibility for parish service, if one is a pastor, while reasoning freely about religious dogmas.
We might think that there is nothing very different here from what has been meant, since the sixteenth century, by freedom of conscience: the right to think as one pleases so long as one obeys as one must. Yet it is here that Kant brings into play another distinction, and in a rather surprising way. The distinction he introduces is between the private and public uses of reason. But he adds at once that reason must be free in its public use, and must be submissive in its private use. Which is, term for term, the opposite of what is ordinarily called freedom of conscience.

But we must be somewhat more precise. What constitutes, for Kant, this private use of reason? In what area is it exercised? Man, Kant says, makes a private use of reason when he is “a cog in a machine”; that is, when he has a role to play in society and jobs to do: to be a soldier, to have taxes to pay, to be in charge of a parish, to be a civil servant, all this makes the human being a particular segment of society; he finds himself thereby placed in a circumscribed position, where he has to apply particular rules and pursue particular ends. Kant does not ask that people practice a blind and foolish obedience, but that they adapt the use they make of their reason to these determined circumstances; and reason must then be subjected to the particular ends in view. Thus there cannot be, here, any free use of reason.
On the other hand, when one is reasoning only in order to use one’s reason, when one is reasoning as a reasonable being (and not as a cog in a machine), when one is reasoning as a member of reasonable humanity, then the use of reason must be free and public. Enlightenment is thus not merely the process by which individuals would see their own personal freedom of thought guaranteed. There is Enlightenment when the universal, the free, and the public uses of reason are superimposed on one another.
Now this leads us to a fourth question that must be put to Kant’s text. We can readily see how the universal use of reason (apart from any private end) is the business of the subject himself as an individual; we can readily see, too, how the freedom of this use may be assured in a purely negative manner through the absence of any challenge to it; but how is a public use of that reason to be assured? Enlightenment, as we see, must not be conceived simply as a general process affecting all humanity; it must not be conceived only as an obligation prescribed to individuals: it now appears as a political problem. The question, in any event, is that of knowing how the use of reason can take the public form that it requires, how the audacity to know can be exercised in broad daylight, while individuals are obeying as scrupulously as possible. And Kant, in conclusion, proposes to Frederick II, in scarcely veiled terms, a sort of contract — what might be called the contract of rational despotism with free reason: the public and free use of autonomous reason will be the best guarantee of obedience, on condition, however, that the political principle that must be obeyed itself be in conformity with universal reason.
Let us leave Kant’s text here. I do not by any means propose to consider it as capable of constituting an adequate description of Enlightenment; and no historian, I think, could be satisfied with it for an analysis of the social, political, and cultural transformations that occurred at the end of the eighteenth century.
Nevertheless, notwithstanding its circumstantial nature, and without intending to give it an exaggerated place in Kant’s work, I believe that it is necessary to stress the connection that exists between this brief article and the three Critiques. Kant in fact describes Enlightenment as the moment when humanity is going to put its own reason to use, without subjecting itself to any authority; now it is precisely at this moment that the critique is necessary, since its role is that of defining the conditions under which the use of reason is legitimate in order to determine what can be known, what must be done, and what may be hoped. Illegitimate uses of reason are what give rise to dogmatism and heteronomy, along with illusion; on the other hand, it is when the legitimate use of reason has been clearly defined in its principles that its autonomy can be assured. The critique is, in a sense, the handbook of reason that has grown up in Enlightenment; and, conversely, the Enlightenment is the age of the critique.
It is also necessary, I think, to underline the relation between this text of Kant’s and the other texts he devoted to history. These latter, for the most part, seek to define the internal teleology of time and the point toward which the history of humanity is moving. Now the analysis of Enlightenment, defining this history as humanity’s passage to its adult status, situates contemporary reality with respect to the overall movement and its basic directions. But at the same time, it shows how, at this very moment, each individual is responsible in a certain way for that overall process.
The hypothesis I should like to propose is that this little text is located in a sense at the crossroads of critical reflection and reflection on history. It is a reflection by Kant on the contemporary status of his own enterprise. No doubt it is not the first time that a philosopher has given his reasons for undertaking his work at a particular moment. But it seems to me that it is the first time that a philosopher has connected in this way, closely and from the inside, the significance of his work with respect to knowledge, a reflection on history and a particular analysis of the specific moment at which he is writing and because of which he is writing. It is in the reflection on “today” as difference in history and as motive for a particular philosophical task that the novelty of this text appears to me to lie.
And, by looking at it in this way, it seems to me we may recognize a point of departure: the outline of what one might call the attitude of modernity.
I know that modernity is often spoken of as an epoch, or at least as a set of features characteristic of an epoch; situated on a calendar, it would be preceded by a more or less naive or archaic premodernity, and followed by an enigmatic and troubling “postmodernity.” And then we find ourselves asking whether modernity constitutes the sequel to the Enlightenment and its development, or whether we are to see it as a rupture or a deviation with respect to the basic principles of the eighteenth century.
Thinking back on Kant’s text, I wonder whether we may not envisage modernity rather as an attitude than as a period of history. And by “attitude,” I mean a mode of relating to contemporary reality; a voluntary choice made by certain people; in the end, a way of thinking and feeling; a way, too, of acting and behaving that at one and the same time marks a relation of belonging and presents itself as a task. A bit, no doubt, like what the Greeks called an ethos. And consequently, rather than seeking to distinguish the “modern era” from the “premodern” or “postmodern,” I think it would be more useful to try to find out how the attitude of modernity, ever since its formation, has found itself struggling with attitudes of “countermodernity.”
To characterize briefly this attitude of modernity, I shall take an almost indispensable example, namely, Baudelaire; for his consciousness of modernity is widely recognized as one of the most acute in the nineteenth century.
1. Modernity is often characterized in terms of consciousness of the discontinuity of time: a break with tradition, a feeling of novelty, of vertigo in the face of the passing moment. And this is indeed what Baudelaire seems to be saying when he defines modernity as “the ephemeral, the fleeting, the contingent.”2 But, for him, being modern does not lie in recognizing and accepting this perpetual movement; on the contrary, it lies in adopting a certain attitude with respect to this movement; and this deliberate, difficult attitude consists in recapturing something eternal that is not beyond the present instant, nor behind it, but within it. Modernity is distinct from fashion, which does no more than call into question the course of time; modernity is the attitude that makes it possible to grasp the “heroic” aspect of the present moment. Modernity is not a phenomenon of sensitivity to the fleeting present; it is the will to “heroize” the present.
I shall restrict myself to what Baudelaire says about the painting of his contemporaries. Baudelaire makes fun of those painters who, finding nineteenth-century dress excessively ugly, want to depict nothing but ancient togas. But modernity in painting does not consist, for Baudelaire, in introducing black clothing onto the canvas. The modern painter is the one who can show the dark frock-coat as “the necessary costume of our time,” the one who knows how to make manifest, in the fashion of the day, the essential, permanent, obsessive relation that our age entertains with death. “The dress-coat and frock-coat not only possess their political beauty, which is an expression of universal equality, but also their poetic beauty, which is an expression of the public soul — an immense cortège of undertaker’s mutes (mutes in love, political mutes, bourgeois mutes…). We are each of us celebrating some funeral.”3 To designate this attitude of modernity, Baudelaire sometimes employs a litotes that is highly significant because it is presented in the form of a precept: “You have no right to despise the present.”
2. This heroization is ironical, needless to say. The attitude of modernity does not treat the passing moment as sacred in order to try to maintain or perpetuate it. It certainly does not involve harvesting it as a fleeting and interesting curiosity. That would be what Baudelaire would call the spectator’s posture. The flâneur, the idle, strolling spectator, is satisfied to keep his eyes open, to pay attention and to build up a storehouse of memories. In opposition to the flâneur, Baudelaire describes the man of modernity: “Away he goes, hurrying, searching …. Be very sure that this man … — this solitary, gifted with an active imagination, ceaselessly journeying across the great human desert — has an aim loftier than that of a mere flâneur, an aim more general, something other than the fugitive pleasure of circumstance. He is looking for that quality which you must allow me to call ‘modernity.’ … He makes it his business to extract from fashion whatever element it may contain of poetry within history.” As an example of modernity, Baudelaire cites the artist Constantin Guys. In appearance a spectator, a collector of curiosities, he remains “the last to linger wherever there can be a glow of light, an echo of poetry, a quiver of life or a chord of music; wherever a passion can pose before him, wherever natural man and conventional man display themselves in a strange beauty, wherever the sun lights up the swift joys of the depraved animal.”
3. But let us make no mistake. Constantin Guys is not a flâneur; what makes him the modern painter par excellence in Baudelaire’s eyes is that, just when the whole world is falling asleep, he begins to work, and he transfigures that world. His transfiguration does not entail an annulling of reality, but a difficult interplay between the truth of what is real and the exercise of freedom; “natural” things become “more than natural,” “beautiful” things become “more than beautiful,” and individual objects appear “endowed with an impulsive life like the soul of their creator.”5 For the attitude of modernity, the high value of the present is indissociable from a desperate eagerness to imagine it, to imagine it otherwise than it is, and to transform it not by destroying it but by grasping it in what it is. Baudelairean modernity is an exercise in which extreme attention to what is real is confronted with the practice of a liberty that simultaneously respects this reality and violates it.

4. However, modernity for Baudelaire is not simply a form of relationship to the present; it is also a mode of relationship that has to be established with oneself. The deliberate attitude of modernity is tied to an indispensable asceticism. To be modern is not to accept oneself as one is in the flux of the passing moments; it is to take oneself as object of a complex and difficult elaboration: what Baudelaire, in the vocabulary of his day, calls dandysme. Here I shall not recall in detail the well-known passages on “vulgar, earthy, vile nature”; on man’s indispensable revolt against himself; on the “doctrine of elegance” which imposes “upon its ambitious and humble disciples” a discipline more despotic than the most terrible religions; the pages, finally, on the asceticism of the dandy who makes of his body, his behavior, his feelings and passions, his very existence, a work of art. Modern man, for Baudelaire, is not the man who goes off to discover himself, his secrets and his hidden truth; he is the man who tries to invent himself. This modernity does not “liberate man in his own being”; it compels him to face the task of producing himself.
5. Let me add just one final word. This ironic heroization of the present, this transfiguring play of freedom with reality, this ascetic elaboration of the self — Baudelaire does not imagine that these have any place in society itself, or in the body politic. They can only be produced in another, a different place, which Baudelaire calls art.

I do not pretend to be summarizing in these few lines either the complex historical event that was the Enlightenment, at the end of the eighteenth century, or the attitude of modernity in the various guises it may have taken on during the last two centuries.
I have been seeking, on the one hand, to emphasize the extent to which a type of philosophical interrogation — one that simultaneously problematizes man’s relation to the present, man’s historical mode of being, and the constitution of the self as an autonomous subject — is rooted in the Enlightenment. On the other hand, I have been seeking to stress that the thread that may connect us with the Enlightenment is not faithfulness to doctrinal elements, but rather the permanent reactivation of an attitude — that is, of a philosophical ethos that could be described as a permanent critique of our historical era. I should like to characterize this ethos very briefly.
A. Negatively
This ethos implies, first, the refusal of what I like to call the “blackmail” of the Enlightenment. I think that the Enlightenment, as a set of political, economic, social, institutional, and cultural events on which we still depend in large part, constitutes a privileged domain for analysis. I also think that as an enterprise for linking the progress of truth and the history of liberty in a bond of direct relation, it formulated a philosophical question that remains for us to consider. I think, finally, as I have tried to show with reference to Kant’s text, that it defined a certain manner of philosophizing.

But that does not mean that one has to be “for” or “against” the Enlightenment. It even means precisely that one has to refuse everything that might present itself in the form of a simplistic and authoritarian alternative: you either accept the Enlightenment and remain within the tradition of its rationalism (this is considered a positive term by some and used by others, on the contrary, as a reproach); or else you criticize the Enlightenment and then try to escape from its principles of rationality (which may be seen once again as good or bad). And we do not break free of this blackmail by introducing “dialectical” nuances while seeking to determine what good and bad elements there may have been in the Enlightenment.

We must try to proceed with the analysis of ourselves as beings who are historically determined, to a certain extent, by the Enlightenment. Such an analysis implies a series of historical inquiries that are as precise as possible; and these inquiries will not be oriented retrospectively toward the “essential kernel of rationality” that can be found in the Enlightenment and that would have to be preserved in any event; they will be oriented toward the “contemporary limits of the necessary,” that is, toward what is not or is no longer indispensable for the constitution of ourselves as autonomous subjects.

This permanent critique of ourselves has to avoid the always too facile confusions between humanism and Enlightenment.
We must never forget that the Enlightenment is an event, or a set of events and complex historical processes, that is located at a certain point in the development of European societies. As such, it includes elements of social transformation, types of political institution, forms of knowledge, projects of rationalization of knowledge and practices, technological mutations that are very difficult to sum up in a word, even if many of these phenomena remain important today. The one I have pointed out and that seems to me to have been at the basis of an entire form of philosophical reflection concerns only the mode of reflective relation to the present.

Humanism is something entirely different. It is a theme or rather a set of themes that have reappeared on several occasions, over time, in European societies; these themes, always tied to value judgments, have obviously varied greatly in their content, as well as in the values they have preserved. Furthermore, they have served as a critical principle of differentiation. In the seventeenth century there was a humanism that presented itself as a critique of Christianity or of religion in general; there was a Christian humanism opposed to an ascetic and much more theocentric humanism. In the nineteenth century there was a suspicious humanism, hostile and critical toward science, and another that, to the contrary, placed its hope in that same science. Marxism has been a humanism; so have existentialism and personalism; there was a time when people supported the humanistic values represented by National Socialism, and when the Stalinists themselves said they were humanists.

From this, we must not conclude that everything that has ever been linked with humanism is to be rejected, but that the humanistic thematic is in itself too supple, too diverse, too inconsistent to serve as an axis for reflection. And it is a fact that, at least since the seventeenth century, what is called humanism has always been obliged to lean on certain conceptions of man borrowed from religion, science, or politics. Humanism serves to color and to justify the conceptions of man to which it is, after all, obliged to take recourse.

Now, in this connection, I believe that this thematic, which so often recurs and which always depends on humanism, can be opposed by the principle of a critique and a permanent creation of ourselves in our autonomy: that is, a principle that is at the heart of the historical consciousness that the Enlightenment has of itself. From this standpoint, I am inclined to see Enlightenment and humanism in a state of tension rather than identity.

In any case, it seems to me dangerous to confuse them; and further, it seems historically inaccurate. If the question of man, of the human species, of the humanist, was important throughout the eighteenth century, this is very rarely, I believe, because the Enlightenment considered itself a humanism. It is worthwhile, too, to note that throughout the nineteenth century, the historiography of sixteenth-century humanism, which was so important for people like Sainte-Beuve or Burckhardt, was always distinct from, and sometimes explicitly opposed to, the Enlightenment and the eighteenth century. The nineteenth century had a tendency to oppose the two, at least as much as to confuse them.

In any case, I think that, just as we must free ourselves from the intellectual blackmail of being for or against the Enlightenment, we must escape from the historical and moral confusionism that mixes the theme of humanism with the question of the Enlightenment. An analysis of their complex relations in the course of the last two centuries would be a worthwhile project, an important one if we are to bring some measure of clarity to the consciousness that we have of ourselves and of our past.
B. Positively
Yet, while taking these precautions into account, we must obviously give a more positive content to what may be a philosophical ethos consisting in a critique of what we are saying, thinking, and doing, through a historical ontology of ourselves.
1. This philosophical ethos may be characterized as a limit-attitude. We are not talking about a gesture of rejection. We have to move beyond the outside-inside alternative; we have to be at the frontiers. Criticism indeed consists of analyzing and reflecting upon limits. But if the Kantian question was that of knowing what limits knowledge has to renounce transgressing, it seems to me that the critical question today has to be turned back into a positive one: in what is given to us as universal, necessary, obligatory, what place is occupied by whatever is singular, contingent, and the product of arbitrary constraints? The point, in brief, is to transform the critique conducted in the form of necessary limitation into a practical critique that takes the form of a possible transgression.

This entails an obvious consequence: that criticism is no longer going to be practiced in the search for formal structures with universal value, but rather as a historical investigation into the events that have led us to constitute ourselves and to recognize ourselves as subjects of what we are doing, thinking, saying. In that sense, this criticism is not transcendental, and its goal is not that of making a metaphysics possible: it is genealogical in its design and archaeological in its method. Archaeological — and not transcendental — in the sense that it will not seek to identify the universal structures of all knowledge or of all possible moral action, but will seek to treat the instances of discourse that articulate what we think, say, and do as so many historical events. And this critique will be genealogical in the sense that it will not deduce from the form of what we are what it is impossible for us to do and to know; but it will separate out, from the contingency that has made us what we are, the possibility of no longer being, doing, or thinking what we are, do, or think. It is not seeking to make possible a metaphysics that has finally become a science; it is seeking to give new impetus, as far and wide as possible, to the undefined work of freedom.
2. But if we are not to settle for the affirmation or the empty dream of freedom, it seems to me that this historico-critical attitude must also be an experimental one. I mean that this work done at the limits of ourselves must, on the one hand, open up a realm of historical inquiry and, on the other, put itself to the test of reality, of contemporary reality, both to grasp the points where change is possible and desirable, and to determine the precise form this change should take. This means that the historical ontology of ourselves must turn away from all projects that claim to be global or radical. In fact we know from experience that the claim to escape from the system of contemporary reality so as to produce the overall programs of another society, of another way of thinking, another culture, another vision of the world, has led only to the return of the most dangerous traditions.

I prefer the very specific transformations that have proved to be possible in the last twenty years in a certain number of areas that concern our ways of being and thinking, relations to authority, relations between the sexes, the way in which we perceive insanity or illness; I prefer even these partial transformations that have been made in the correlation of historical analysis and the practical attitude, to the programs for a new man that the worst political systems have repeated throughout the twentieth century.

I shall thus characterize the philosophical ethos appropriate to the critical ontology of ourselves as a historico-practical test of the limits that we may go beyond, and thus as work carried out by ourselves upon ourselves as free beings.
3. Still, the following objection would no doubt be entirely legitimate: if we limit ourselves to this type of always partial and local inquiry or test, do we not run the risk of letting ourselves be determined by more general structures of which we may well not be conscious, and over which we may have no control?

To this, two responses. It is true that we have to give up hope of ever acceding to a point of view that could give us access to any complete and definitive knowledge of what may constitute our historical limits. And from this point of view the theoretical and practical experience that we have of our limits and of the possibility of moving beyond them is always limited and determined; thus we are always in the position of beginning again.

But that does not mean that no work can be done except in disorder and contingency. The work in question has its generality, its systematicity, its homogeneity, and its stakes.
Its Stakes
These are indicated by what might be called “the paradox of the relations of capacity and power.” We know that the great promise or the great hope of the eighteenth century, or a part of the eighteenth century, lay in the simultaneous and proportional growth of individuals with respect to one another. And, moreover, we can see that throughout the entire history of Western societies (it is perhaps here that the root of their singular historical destiny is located — such a peculiar destiny, so different from the others in its trajectory and so universalizing, so dominant with respect to the others), the acquisition of capabilities and the struggle for freedom have constituted permanent elements. Now the relations between the growth of capabilities and the growth of autonomy are not as simple as the eighteenth century may have believed. And we have been able to see what forms of power relation were conveyed by various technologies (whether we are speaking of productions with economic aims, or institutions whose goal is social regulation, or of techniques of communication): disciplines, both collective and individual, procedures of normalization exercised in the name of the power of the state, demands of society or of population zones, are examples. What is at stake, then, is this: How can the growth of capabilities be disconnected from the intensification of power relations?
This leads to the study of what could be called “practical systems.” Here we are taking as a homogeneous domain of reference not the representations that men give of themselves, not the conditions that determine them without their knowledge, but rather what they do and the way they do it. That is, the forms of rationality that organize their ways of doing things (this might be called the technological aspect) and the freedom with which they act within these practical systems, reacting to what others do, modifying the rules of the game, up to a certain point (this might be called the strategic side of these practices). The homogeneity of these historico-critical analyses is thus ensured by this realm of practices, with their technological side and their strategic side.
These practical systems stem from three broad areas: relations of control over things, relations of action upon others, relations with oneself. This does not mean that each of these three areas is completely foreign to the others. It is well known that control over things is mediated by relations with others; and relations with others in turn always entail relations with oneself, and vice versa. But we have three axes whose specificity and whose interconnections have to be analyzed: the axis of knowledge, the axis of power, the axis of ethics. In other terms, the historical ontology of ourselves has to answer an open series of questions; it has to make an indefinite number of inquiries which may be multiplied and specified as much as we like, but which will all address the questions systematized as follows: How are we constituted as subjects of our own knowledge? How are we constituted as subjects who exercise or submit to power relations? How are we constituted as moral subjects of our own actions?
Finally, these historico-critical investigations are quite specific in the sense that they always bear upon a material, an epoch, a body of determined practices and discourses. And yet, at least at the level of the Western societies from which we derive, they have their generality, in the sense that they have continued to recur up to our time: for example, the problem of the relationship between sanity and insanity, or sickness and health, or crime and the law; the problem of the role of sexual relations; and so on.
But by evoking this generality, I do not mean to suggest that it has to be retraced in its metahistorical continuity over time, nor that its variations have to be pursued. What must be grasped is the extent to which what we know of it, the forms of power that are exercised in it, and the experience that we have in it of ourselves constitute nothing but determined historical figures, through a certain form of problematization that defines objects, rules of action, modes of relation to oneself. The study of modes of problematization (that is, of what is neither an anthropological constant nor a chronological variation) is thus the way to analyze questions of general import in their historically unique form.

A brief summary, to conclude and to come back to Kant.

I do not know whether we will ever reach mature adulthood. Many things in our experience convince us that the historical event of the Enlightenment did not make us mature adults, and we have not reached that stage yet. However, it seems to me that a meaning can be attributed to that critical interrogation on the present and on ourselves which Kant formulated by reflecting on the Enlightenment. It seems to me that Kant’s reflection is even a way of philosophizing that has not been without its importance or effectiveness during the last two centuries. The critical ontology of ourselves has to be considered not, certainly, as a theory, a doctrine, nor even as a permanent body of knowledge that is accumulating; it has to be conceived as an attitude, an ethos, a philosophical life in which the critique of what we are is at one and the same time the historical analysis of the limits that are imposed on us and an experiment with the possibility of going beyond them.

This philosophical attitude has to be translated into the labor of diverse inquiries. These inquiries have their methodological coherence in the at once archaeological and genealogical study of practices envisaged simultaneously as a technological type of rationality and as strategic games of liberties; they have their theoretical coherence in the definition of the historically unique forms in which the generalities of our relations to things, to others, to ourselves, have been problematized. They have their practical coherence in the care brought to the process of putting historico-critical reflection to the test of concrete practices. I do not know whether it must be said today that the critical task still entails faith in Enlightenment; I continue to think that this task requires work on our limits, that is, a patient labor giving form to our impatience for liberty.


Cosmos and History: The Journal of Natural and Social Philosophy, vol. 7, no. 2, 2011

David Storey, Professor of Philosophy, Fordham

ABSTRACT: Though nihilism is a major theme in late modern philosophy from Hegel onward, it is only relatively recently that it has been treated as the subject of monographs and anthologies. Commentators have offered a number of accounts of the origins and nature of nihilism. Some see it as a purely historical and predominantly modern phenomenon, a consequence of the social, economic, ecological, political, and/or religious upheavals of modernity. Others think it stems from human nature itself, and should be seen as a perennial problem. Still others think that nihilism has ontological significance and issues from the nature of being itself. In this essay, I survey the most important of these narratives of nihilism to show
how commonly the advent and spread of nihilism is linked with changing conceptions of (humanity’s relation to) nature. At root, nihilism is a problem about humanity’s relation to nature, about a crisis in human freedom and willing after the collapse of the cosmos, the erosion of a hierarchically ordered nature in which humans have a proper place. Two themes recur in the literature: first, the collapse of what is commonly called the “great chain of being” or the cosmos generally; and second, the increased importance placed on human will and subjectivity and, correlatively, the significance of human history as opposed to nature.

KEYWORDS: Nihilism; Nature; Cosmos

We typically regard nihilism as a problem about human life. While Nietzsche and Heidegger are undoubtedly the thinkers most closely associated with nihilism, it has an important history (predominantly in Europe) before them and has led an interesting life (especially in American culture) after them. Nietzsche’s proclamation, “God is dead!”, has been taken as the historical and philosophical fountainhead of European nihilism. As with any idea, however, the history of nihilism is more complex, and over the last half-century a handful of scholars have set out to trace its elusive arc.1 Though nihilism is a major theme in late modern philosophy from Hegel onward, it is only relatively recently that it has been treated as the subject of monographs and anthologies. Commentators have offered a number of accounts of the origins and nature of nihilism. Some see it as a purely historical and predominantly modern phenomenon, a consequence of the social, economic, ecological, political, and/or religious upheavals of modernity. Others think it stems from human nature itself, and should be seen as a perennial problem. Still others think that nihilism has ontological significance and issues from the nature of being itself. In this essay, I survey the most important of these narratives of nihilism to show
how commonly the advent and spread of nihilism is linked, as it is by Nietzsche and Heidegger, with changing conceptions of (humanity’s relation to) nature. At root, nihilism is a problem about humanity’s relation to nature, about a crisis in human freedom and willing after the collapse of the cosmos, the erosion of a hierarchically ordered nature in which humans have a proper place. Two themes recur in the literature: first, the collapse of what is commonly called the “great chain of being”2 or the cosmos generally; and second, the increased importance placed on human will and subjectivity and, correlatively, the significance of human history as opposed to nature.
Nihilism originated as a distinct philosophical concept in the 18th century. As Michael Gillespie reports, “the concept of nihilism first came into general usage as a description of the danger [German] idealism posed for the intellectual, spiritual, and political health of humanity. The first to use the term in print was apparently F. L. Goetzius in his De nonismo et nihilismo in theologia (1733).”3 Tracts portraying Kantian critical philosophy as a form of nihilism appeared near the end of the century, but it would fall to F.H. Jacobi to give the first explicit formulation of the concept. Convinced that idealism posed an existential threat to traditional Christian belief,
Jacobi attacked both Kant and Fichte, the former in his essay, “Idealism and Nihilism,” and the latter in a letter to Fichte in 1799. He branded Fichte’s philosophy as nihilism by drawing a stark contrast between a steadfast faith in a God beyond human subjectivity and an insatiable reason that, as Otto Poeggeler puts it, “perceives only itself” and “dissolves everything that is given into the nothingness of subjectivity.”4 Jacobi believed that idealism entailed a lopsided focus on human subjectivity that not only shut out the divine, but severed itself from any external
reality whatsoever, including nature. If things-in-themselves cannot be cognized, and actuality itself is but a category of the understanding, then it seems to follow that things-in-themselves do not actually exist. Idealism shifts, to use Gilson’s formulation, from the “exterior to the interior,” but does not make the move from the “interior to the superior”; in fact, it does not “move” at all, since the exterior—nature—is regarded as a realm of mere appearances. For Jacobi, it is only through a decisive act of will, a recognition of the stark either/or before us and a resolute commitment to God, that humans can find their proper place. As Jacobi challenges Fichte: “God is and is outside of me, a living essence that subsists for itself, or I am God. There is no third possibility.”5
Three things stand out in this passage. First, Jacobi is simultaneously charging Fichte with pantheism and atheism, positions he regards as basically identical. Before mounting his assault on idealism, Jacobi had argued that Spinoza’s pantheism was actually atheism. Jacobi seems to have regarded Fichte’s idealism as a doomed attempt to marry the focus on freedom in Descartes and Kant to Spinoza’s holistic and divinized view of nature. So nihilism is portrayed as emerging, roughly speaking, out of attempts to integrate modern conceptions of freedom and nature. Second, Jacobi’s denial of a “third way” is, as we will see, a common complaint among critics of nihilism, or of philosophies alleged to be nihilistic. Those who cannot accept the basic dualities and either/or’s of existence, so the thinking goes, attempt to sublate them in elaborate monistic philosophies that bend logic and language beyond their breaking points in order to chart a third way–to, in Kierkegaard’s turn of biblical phrase, join what God has separated. The attempt to include everything ends up embracing nothing. Third, it is more than a little ironic that Jacobi’s fideistic focus on the will, intended as an antidote to nihilism, would later be pointed to as a symptom of nihilism by Nietzsche because the will is directed toward a false object (God) and by Heidegger because the triumph of the will in modern thought is the fruition of the ancient seed of metaphysics, the drive to frame being as presence. With this story of the origin of the concept of nihilism in place, let us take a look at some of the most sustained attempts to determine the nature of nihilism.

Nishitani Keiji. Despite nihilism’s presence at the birth of German idealism (and prominence after its death), it was not to be made a subject of study in its own right until the 1930s and ‘40s, by Karl Löwith and the unlikely figure of Nishitani Keiji. Nishitani was a member of Japan’s Kyoto School, a vanguard of Japanese intellectuals, many of whom travelled to Germany to study with leading European thinkers and endeavored to integrate modern Western philosophy, particularly Nietzsche, Heidegger and the German Idealists, with Buddhist thought.6 Graham
Parkes suggests that since, e.g., the Buddhist tradition never took substance or presence as foundational philosophical categories, it is no accident that one of the first relatively unified statements on nihilism was made by a non-Western philosopher: “Nishitani’s perspective has allowed him to see as more unified than Western commentators the stream of nihilism which springs from the decline of Hegelian philosophy through Feuerbach, Stirner, and Schopenhauer to Nietzsche and Heidegger.”7 In other words, from a Buddhist perspective rooted in the belief that all things are empty, finite, and lacking in “own-being,” the Western notions of being as
standing presence or stable substance are obviously a poor foundation to build on.
The hallmarks of Nishitani’s approach to nihilism in this text are a rigorous analysis of Nietzsche’s treatment of nihilism, a spirited defense of Nietzsche’s solution, the application of Buddhist conceptual tools to the problem, and a critique of atheistic positions such as those of Stirner, Marx, and Sartre. He argues that Heidegger’s significance in the history of nihilism lies in his insistence on its connection to ontology: “Heidegger gives us nothing less than an ontology within which nihilism becomes a philosophy. By disclosing nothing at the ground of all beings and summoning it forth, nihilism becomes the basis of a new metaphysics.”8 One of the most important contributions of Nishitani’s account is his insistence that the deepest significance of nihilism is ontological, not merely psychological or cultural, and that its rise in modern Western philosophy is a symptom of a failure to adequately grapple with the concept of the nothing.

Karl Löwith. If Nishitani’s approach to nihilism has the virtue of distance, Karl Löwith’s has the advantage of proximity.9 A student of Heidegger and an eye-witness to the real-world ravages of political nihilism in the rise of Nazism, Löwith provides a detailed account of the prominent role nihilism played in post-Hegelian European thought and culture, and he offers a rich account of the intellectual and cultural trends that culminated in Heidegger’s philosophy. On Löwith’s telling,

Ever since the middle of the [19th] century, the construction of the history of Europe has not proceeded according to a schema of progress, but instead according to that of decline. This change began not at the end of the century but rather at its beginning, with Fichte’s lectures, which he saw as an age of ‘perfected iniquity.’ From there, there proceeds through European literature and philosophy an uninterrupted chain of critiques…which decisively condition
not simply the academic but the actual intellectual history between Hegel and Nietzsche. The state of Being in decline along with one’s own time is also the ground and soil for Heidegger’s ‘destruction,’ for his will to dismantle and rebuild, back to the foundations of a tradition which has become untenable.10
Fichte’s indictment of the present age would be the prototype for a long list of scathing critiques of modern society, from Kierkegaard’s The Present Age to Nietzsche’s Untimely Meditations. Once Hegel had, as Löwith puts it, “made the negation of what exists” the principle of genuine philosophy, the task of philosophy would widely become identified with Zeitdiagnose, and the role of the philosopher was to become, as Nietzsche put it, the physician of culture. Löwith shows how this spirit is embodied by thinkers as disparate as Marx and Kierkegaard:

Marx’s worldly critique of the bourgeois-capitalist world corresponds to Kierkegaard’s critique of the bourgeois-Christian world, which is as foreign to Christianity in its origins as the bourgeois or civil state is to a polis. That Marx places the outward existential relations of the masses before a decision and Kierkegaard the inward existential relation of the individual to himself, that Marx philosophizes without God and Kierkegaard before God—these apparent oppositions have as a common presupposition the decay of existence along with
God and the world.11
Both thinkers, he continues, “conceived ‘what is’ as a world determined by commodities and money, and as an existence defined throughout by irony and boredom.”12 Marx’s assertion of a purely “human” world and Kierkegaard’s espousal of a “worldless Christianity” both share in common the severance of the human from the natural. For Marx, nature is merely the positum there to be negated and appropriated by human labor. For Kierkegaard, as Walter Kaufmann quips, nature is irrelevant to human life: “He sweeps away the whole conception of a cosmos as a mere distraction… Here is man, and ‘one thing is needful’: a decision.”13 Hans Jonas, another of Heidegger’s students, detected a similar problem with Heidegger’s own account of human existence: namely, that it did not place humans within any kind of scala naturae that is the locus of value. Löwith’s larger point, though, is that the disintegration of the Hegelian vision resulted in a grab bag of incompatible viewpoints usually consisting of a scathing critique of the present, a longing for a lost age, and/or a radical program for individual or social renewal.

C.S. Lewis. Another vital voice in the discourse on nihilism—and who also saw firsthand the fallout from political nihilism in the world wars of the 20th century—is C.S. Lewis. Though Lewis does not explicitly mention the specter of nihilism in his classic The Abolition of Man, he clearly laments its corrosive effects on Western civilization and insists it arose largely due to a disruption in humanity’s relationship to nature. The abolition of human nature, he hypothesizes, is the unintended consequence of the attempt to bend nature to human purposes and is the endgame of scientific naturalism. Moreover, this attempt to defeat nature and scrub it free of undesirables results, paradoxically, in nature’s total victory. The more of reality we concede to the objective, value-free domain of “mere nature,” the less free we become; or more precisely, the more freedom becomes a curse, because its polestars for navigating the field of possibilities—an objective morality rooted in nature or the “Tao,” Lewis’ catchall phrase for premodern notions of nature as a cosmos to which humans must conform—have been snuffed out. The human is left with nothing but his drives and instincts to decide how to act; he is left, in other words, with nothing but nature to guide him. But since this is not a cosmic nature with a logos, an ordered hierarchy of matter, body, soul, and spirit, but a nature bereft of reason or moral value, and since reason has been downgraded to a tool and morality whittled down to
a matter of preference, it is a matter of the blind leading the blind; a matter, in short, of nihilism. What happens, then, is that whatever someone happens to prefer is called natural. Somehow, the attempt to make everything “natural” ends up denaturing the very notion of nature.

Stanley Rosen and Allan Bloom. Two writers who made similar observations about nihilism were both students of the political philosopher Leo Strauss: Stanley Rosen and Allan Bloom. Both trace the phenomenon to a gradual shift in the reigning conceptions of reason, morality, and nature throughout the modern period. Like Lewis, Rosen describes nihilism as partly the collapse in the belief in objective moral truths, which is abetted by the widespread adoption of a non-normative, instrumental view of reason. Once the will is decoupled from the intellect and no longer choosing from among the ends the intellect presents to it, and once the logos is removed from nature, then there are no longer any objective moral truths that the intellect can apprehend and present to the will as worthy candidates for action. Everything falls to the will, and since the will cannot furnish reasons for acting one way or another—and since reason itself has been relieved of command to do so—then everything is permitted. Rosen defines nihilism in this Nietzschean sense, and asserts that “For those who are not gods, recourse to a [value] creation ex nihilo…reduces reason to nonsense by equating the sense or significance of speech with silence.”14

While nihilism is often regarded primarily as a moral position, e.g., value relativism, Rosen contends that the moral implications are in fact derivative and stem from a “contemporary crisis in reason” rooted in the problem of historicism. Rosen defines historicism as “the view that rational speech about the good is possible only with respect to the meaning of history” and “the inability to distinguish being and time.”15 Historicism was ironically the unintended consequence of an attempted expansion of reason: “the influence of mathematical physics led to the secularization of metaphysics by transforming it into the philosophy of history, whereupon the influence of history, together with the autonomous tendencies of the mathematizing ego, led to the historicizing of mathematical physics.”16 In other words, while the premodern task of philosophy, generally speaking, was (partly) to discern the unchanging logos within nature, in the modern period it is expanded to tracing the logos within history—but this leads, paradoxically, to the view that all rational speech is reducible to historical, i.e., contingent, conditions. The strange thing is that such a nihilism can equally accommodate the view that “everything is natural”—since there is no reason or necessity governing human affairs and action, they are merely an arbitrary matter of chance, will, or instinct—and the view that “nothing is natural”—since there are no trans-historical or trans-cultural metaphysical or moral truths and everything, including theses about nature, is a product of history.
Rosen insists that the notion of “creativity” played an important part in this process. According to this view, a person’s moral life consists not in obeying the dictates of a conscience common to all or in acting in accordance with his rationally knowable nature, but in being faithful to the oracle of his inner genius, the natural creativity welling up from below. Once creativity, not reason, is enshrined as the center of gravity in human nature, the next logical step is to adopt the view that all speech about being—all philosophy, science, and mathematics—is poetry. Rosen thinks that the influence of historicism on the view of reason and metaphysics, and the effect of the notion of creativity on the view of morality and human nature, are the main causes of the advent of nihilism: “the fundamental problem in a study of nihilism is to dissect the language of historicist ontology with the associated doctrine of human creativity.”17 Heidegger and Nietzsche are the most important thinkers in this drama: Heidegger because of his attempt to think being in terms of time, and Nietzsche because of his reduction of all human faculties to a creative will to power. Though their diagnoses of nihilism are unparalleled, Rosen thinks their solutions are flawed because both are victims of the modern “rationalistic view of reason”:

By detaching ‘reasonable’ from ‘good,’ the friends of reason made it impossible to assert the goodness of reason…. If reason is conceived exclusively on the model of mathematics, and if mathematics is itself understood in terms of Newtonian rather than Pythagorean science, then the impossibility of asserting the goodness of reason is the extreme instance of the manifest evil of reason.
Reason (we are told) objectifies, reifies, alienates; it debases or destroys the genuinely human…. Man has become alienated from his own authentic or creative existence by the erroneous projection of the supersensible world of Platonic ideas…and so of an autonomous technology, which, as the authentic contemporary historical manifestation of ‘rationalism,’ will destroy us or enslave us to machines.18
As such, since the good was not to be found by the light of reason, it had to be found somewhere else; but since the very notion of the good becomes unintelligible when severed from reason, it was nowhere to be found, and thus had to be created. But since the goodness of this creativity consists in its spontaneity and novelty, it must supply its own criterion and guarantee its own legitimacy.

Allan Bloom devotes the middle act of his The Closing of the American Mind to what he calls “Nihilism, American Style.” Despite its popular acclaim, the book contains a sophisticated account of nihilism. Though the tenor of his treatment is similar to Rosen’s, and though both thinkers emphasize the connection between nihilism and the modern view of nature, Bloom’s account is unique on at least two fronts. First, he illustrates how nihilism has been democratized, normalized, and neutered in American culture; this watered-down, latter-day version of nihilism represents, for Bloom, the victory of Nietzsche’s “last man.” Second, where for Rosen the main root of nihilism is the conception of reason that arose out of the scientific revolution, for Bloom it is the major shifts in modern political philosophy. I will briefly illustrate these two fronts.
In Bloom’s genealogy of nihilism, what was once the province of the German high culture of the 19th and early 20th century—the intellectual skyline so exquisitely sketched by Löwith—has been transfused into American popular culture and slang. The post-World War Two generation came to employ a menagerie of terms—“values,” “lifestyle,” “creativity,” “the self,” and “culture,” to name a few—to replace traditional social and religious norms, but divested them of their original meanings, or at least their implications. “Weber,” Bloom observes, “saw that all we care for was threatened by Nietzsche’s insight [that God is dead]…. We require values, which in turn require a peculiar human creativity that is drying up and in any event has no cosmic support.”19 But instead of introducing a mood of despair and a sense of the tragic, nihilism was parlayed into an ethos of self-help, the psychology of self-esteem, a therapeutic culture, and a glib relativism. As Bloom writes, “There is a whole arsenal of terms for talking about nothing—caring, self-fulfillment, expanding consciousness…. Nothing determinate, nothing that has a referent…. American nihilism is a mood, a mood of moodiness, a vague disquiet. It is nihilism without the abyss” (CAM 154). What irks Bloom is that Americans embraced the language of value and creativity with such ease, without gleaning their darker implications and ignorant of the turbulent intellectual, cultural, and political history that produced them. Reminiscent of Heidegger’s discussion of idle talk, Bloom notes how the nostrums of nihilism calcify into democratic dogma: “these words are not reasons, nor were they intended to be reasons. All to the contrary, they were meant to show that our deep human need to know what we are doing and to be good cannot be satisfied. By some miracle these very terms became our justification: nihilism as moralism” (CAM 238-9).
This form of nihilism is the most insidious because the most unconscious, what Nietzsche called “passive nihilism.” It is the most unconscious because its victims are unaware of their condition and incapable of contemplating alternatives.

As we saw with Löwith, the prevailing outlook in European nihilism is one of pessimism and historical decline; but on American soil, seasoned with the spirits of egalitarianism and perpetual progress, nihilism winds up with a “happy ending” and wears a happy face. Bloom thinks this improbable syncretism is more than a fascinating social and cultural phenomenon and has deep philosophical import because it perfectly embodies Nietzsche’s vision of the “last man,” the contented being who lives only for the present and is incapable of self-contempt or reverence for anything greater: “Nihilism in its most palpable sense means that the bourgeois has won, that the future, all foreseeable futures, belong to him, that all heights above him and all depths beneath him are illusory and that life is not worth living on these terms. It is the announcement that all alternatives or correctives…have failed” (CAM 157). Bloom shares with Rosen the view that “Western rationalism has resulted in a rejection of reason,” and thinks that we live, in John Ralston Saul’s term, in an “unconscious civilization”: “We are like ignorant shepherds living on a site where great civilizations once flourished. The shepherds play with the fragments that pop up to the surface, having no notion of the beautiful structures of which they were once a part” (CAM 239).

Bloom is convinced that most of this stems from the revolution in modern political thought brought about by Hobbes, Locke, and Rousseau. Whereas the ancients, generally speaking, relegated the best regime to the realm of speech and thought, doubtful about its possible instantiation in history, the moderns aimed to put the best regime into practice. One of the most important instruments for doing so was positing a “state of nature,” a primal condition from which humanity extricates itself in order to achieve an optimal way of communal life. A stark contrast has to be created between the natural and social orders in order for the rationality, legitimacy, and desirability of the political order to stick. Nature has to be branded as indifferent if not hostile to human flourishing in order for the project to make sense, and human nature must be redrawn as a- or pre-political. As Bloom puts it, “Hobbes, Locke, and Rousseau all found that one way or another nature led men to war, and that civil society’s purpose was not to cooperate with a natural tendency in man toward perfection but to make peace where nature’s imperfection causes war” (CAM 163). Moreover, nature’s obstacles have to be conceived as surmountable through applied science: “if, instead of fighting one another, we band together and make war on our stepmother [nature], who keeps her riches from us, we can at the same time provide for ourselves and end our strife. The conquest of nature, which is made possible by the insight of science and by the power it produces, is the key to the political” (CAM 165). But nature has to be conquered in two senses. Before it can be literally conquered via applied science, it must be theoretically transformed from a great chain of being, a cosmos, into an ontologically homogenous plane of extended matter in motion. 
Just as nature is reduced to its lowest common denominator, politics comes to be based not on virtue or the good, but on the most basic human drives: the fear of death, the desire for comfort, and the goal of self-preservation. This lowering of the human center of gravity—what Strauss called the “low but solid ground”20 on which the moderns built—is what eventually leads to Nietzsche’s last man.

However, this foundation is highly unstable and its implications are deeply ambiguous. Rousseau was the first to tap the fissure that would grow into the abyss addressed by Nietzsche, and this gap has to do with the new concept of nature. As Bloom writes, “For Hobbes and Locke nature is near and unattractive, and man’s movement into society was easy and unambiguously good. For Rousseau nature is distant and attractive, and the move was hard and divided man” (CAM 169). Rousseau, Bloom notes, realizes just how difficult it is to sever the ontological bond between nature and human nature, and that the attempt to do so creates great confusion: “Now there are two competing views about man’s relation to nature, both founded on the modern distinction between nature and society. Nature is the raw material of man’s freedom from harsh necessity, or else man is the polluter of nature. Nature in both cases means dead nature, or nature without man and untouched by man…” (CAM 173). One view sees nature as the problem, while the other sees humanity as the problem; but both views, and all three thinkers, share the prejudice that nature is “dead,” i.e., bereft of soul or subjectivity and flatly opposed to the human order of history, politics, and society. Bloom gives an excellent summary of the difference between the ancient and modern views of nature:

[In the modern view,] all higher purposiveness in nature, which might have been consulted by men’s reason and used to limit human passion, had disappeared. Nature tells us nothing about man specifically and provides no imperatives for his conduct…. Man somehow remains a part of nature, but in a different and much more problematic way than in, say, Aristotle’s philosophy, where soul is at the center and what is highest in man is akin to what is highest in nature, or where soul is nature. Man is really only a part and not the microcosm. Nature has no rank order or hierarchy of being, nor does the self (CAM 176).
This is the consequence of the collapse of the cosmos, the same disproportion between humanity and nature that Rosen points to. There are no “natural limits” to the passions, because only the passions are natural, and all claims of reason are taken to be in some way derived from or motivated by them. Humans have longings that formerly would have been correlated with dimensions of the cosmos, but since the higher levels of the great chain have been shorn off, leaving only the “low but solid ground,” Rousseau, determined to reprise the pursuit of wholeness that was formerly headed by reason, had nowhere to go but “back” before society and “down” into the pre-rational nether reaches of human nature. Rousseau was seeking the norms that he would try to incorporate in his political vision, primarily equality. Since reason—which Rousseau, much like Heidegger, interprets as calculation—is responsible for disrupting the equality of the state of nature, it cannot be the source of the ideal order; instead, the sources for bringing about a harmony between humans and nature are freedom and sympathy. In showing that the so-called “natural” bases of human life according to Hobbes and Locke were actually stones laid down by society, Rousseau attempted to drill down to the real state of nature, but ended up opening Pandora’s box: “Having cut off the higher aspirations of man, those connected with the soul, Hobbes and Locke hoped to find a floor beneath him, which Rousseau removed….And there, down below, Rousseau discovered all the complexity that, in the days before Machiavelli, was up on high…. It is here that the abyss opened up” (CAM 176-7). This is the fountainhead of what would become Nietzschean nihilism and eventuate in value-relativism.

Donald Crosby. While Rosen and Bloom give a heavily historical account of the rise of nihilism, Donald Crosby offers perhaps the most systematic and analytical account in The Specter of the Absurd: Sources and Criticisms of Modern Nihilism, detailing its different types, reconstructing the myriad arguments in its favor, and exposing its philosophical and theological sources. Like both of them, though, he effectively shows how nihilism is a pervasive power in modern thought that underwrites seemingly contrary philosophical positions, such as voluntarism and determinism, and plagues thinkers as different as Jean-Paul Sartre and Bertrand Russell. But he follows Nietzsche and Heidegger in holding that Greek metaphysics and especially Christianity prepare the way for nihilism, and maintains that other traditions, such as process thought, might provide us with resources for confronting it. Moreover, Crosby follows Lewis in calling for a new conception of nature, insisting, with philosopher of science Ivor Leclerc, that to combat nihilism, “what is urgently needed…is a restoration of the philosophy of nature to its former position in the intellectual life of our culture, a position it had prior to the scientific revolution and continued to have up to the triumph of Newtonian physics in the 18th century.”21

A) Types of Nihilism. Crosby describes five types of nihilism: political, moral, epistemological, cosmic, and existential. He is most concerned with the last two, and cites Schopenhauer and Russell as unlikely bedfellows representing these views. For Schopenhauer, he says, “All striving is rooted in deficiency and need, and thus in pain. Each organized form of nature, including human beings, everywhere encounters resistance to its strivings and must struggle to wrest from its surroundings whatever satisfaction it can achieve” (SA 28). For Russell, the cosmos is alien and inhuman and the values we cherish have no realization in it. We must learn to accept that the natural world is oblivious to all distinctions between good and evil and that it is nothing but an arena of blind forces or powers…that combined by sheer chance in the remote past to effect conditions conducive to the emergence of life (SA 27).
Whereas Schopenhauer holds that the cosmos has no intelligible structure whatsoever, Russell’s view is less extreme, in that he holds that mathematics and natural science can provide us with an accurate picture of nature, but one that will not include human values. Russell’s universe is rationally knowable but finally meaningless. Cosmic nihilism is then something of an oxymoron, since it means that there is no such thing as a “cosmos” in the sense of an intelligible and moral order in nature that humans can discover and conform to.
From here, it is a short step to existential nihilism. This view has been advanced most pointedly by writers such as Sartre and Camus. Honesty demands that we face the absurdity of our existence and accept our eventual demise; religion and metaphysics are dismissed as happy hedges against death. The mature person accepts all of this and slogs through, manufacturing meaning through projects chosen for no reason. He cannot provide a reason for living, for the particular life he chooses, or for choosing not to live.

Now Löwith, as noted above, saw the rise of existentialism and nihilism as consequences of the collapse of a view of nature as cosmos or creation. Crosby notes the major shift from the medieval to the modern view of nature: “The medieval method made the needs, purposes, and concerns of human beings the key to its interpretation of the universe; the scientific method tended to exclude human beings altogether from its concept of nature, thereby leaving the problem to philosophy of how to find a place for humans in, or in relation to, the natural order” (SA 202). Moreover, whereas the modern method conceived nature as a uniform plane of being, the medieval method “took for granted…the twin notions that the universe was a domain of quality and value, and that it was a hierarchically ordered, pluralistic domains, consisting of fundamentally different levels or grades of being” (SA 203). Moderns of different stripes all accept the former prejudice. The positivist and the existentialist may have quite different views, but they share the presupposition of cosmic nihilism. My point here is that existential nihilism—the type that garners the most attention, both literary and philosophical—is derivative of cosmic nihilism. Here I think Crosby is wrong in claiming that existential nihilism is the primary philosophical type of nihilism. Cosmic nihilism (a view about the status of nature) is more fundamental than existential nihilism (a view about the status of human beings).
B) Sources of Nihilism. Crosby traces many religious and philosophical sources of nihilism through the Western tradition, but here I just want to focus on two of the more general ones, since they bear directly on our conceptions of nature: anthropocentrism and value externalism. Anthropocentrism, he explains, involves the subordination of nature to human beings and stems from the Judeo-Christian assumption that nature must revolve around us: “we humans are either at the pinnacle of a nature regarded as subservient to our needs and concerns, or we are nowhere. Everything in the universe must focus mainly on us and the problems and prospects of our personal existence, or else the universe is meaningless and our lives are drained of purpose” (SA 128). Once these unrealistic expectations are disappointed and we fall back to earth, the alternatives—dualism and materialism—seem unsatisfying. It is as though we had resided so long on a mountaintop that the lowlands came to seem inhospitable. But Crosby points out that our pique at realizing we are not the center of the universe is conditioned by our clinging to anthropocentric views. Hence while Crosby laments the loss in the transition from the medieval to the modern view of nature that I mentioned above, he approves of, e.g., Nietzsche’s critique of the Christian view: “Nietzsche is correct when he claims that the anthropomorphic assumption is a fundamental cause of nihilism. ‘We have measured the value of the world,’ he says, ‘according to the categories that refer to a purely fictitious world…. What we find here is still the hyperbolic naiveté of man: positing himself as the meaning and measure of the value of things’” (SA 129). The premodern cosmos is thus criticized as (at least in part) an unwarranted projection of human interests, qualities, and desires. Whitehead shows how this is echoed in the modern period: “The individual subject of experience has been substituted for the total drama of reality. Luther asked, ‘How am I justified?’; modern philosophers have asked, ‘How do I have knowledge?’ The emphasis lies upon the subject of experience.”
This brings us to the second source of nihilism, what Crosby calls the “externality of value.” This notion, he says, “requires that we deny that nature has, or can have, any intrinsic significance; it supposes that the only value or importance it may have is that which is externally bestowed” (SA 131). Originally this assumption took root in the Judeo-Christian tradition, the idea that the goodness of nature and natural beings lay in the fact that they were created by God. Later, however, once the cosmos is collapsed and God disappears, humans replace him as the value-bestowers in chief. In conclusion, Crosby thinks that though nihilism has considerable problems as a philosophy—especially its embrace of “false dichotomies” such as “faith in God or existential despair, a human centered world or a meaningless world”—it is a necessary halfway house between untenable modern and premodern philosophies and something new (SA 364). In addition to having a useful debunking function and a laudable emphasis on human freedom, it drives home the “perspectival nature of all knowledge, value, and meaning” (SA 366). When viewed against the backdrop of the Western tradition, perspectivism—such as that of Nietzsche—comes off as a great calamity and a crass relativism. But Crosby submits that this reaction is not necessary: “To be finite and time-bound is no disaster but simply the character of our life in the world. The philosophy of nihilism can help us to acknowledge and accept our finite state by forcing us to give up the age-old dream of attaining a God’s-eye view of things” (SA 366).
Though Crosby appears to cast Nietzsche as a nihilist, I think this was precisely Nietzsche’s conviction: that nihilism is a painful but necessary and even salutary stage through which humans come to terms with the interpretive aspect of their view of nature, abandon otherworldly visions, and realize that nature is an ever-evolving complex of perspectives, none of which commands a total view of reality. Nihilism opens us up to a “constructivist” view of nature; the difficult part, as Crosby notes, is not lapsing into a radical idealism, where nature is dissolved into a positum of the human subject—which was precisely Jacobi’s critique of Fichte. But here we just need to note that Crosby, one of the most astute contemporary scholars of nihilism, draws the connection between nihilism and nature.
Michael Gillespie. Michael Gillespie offers perhaps the most revisionist account of nihilism, arguing that its roots can be traced from late medieval nominalism to Descartes’ epistemological revolution, Fichte’s absolute idealism, and the “dark side” of Romanticism. The principal source of the concept, he contends, is the rise of the capricious, voluntaristic, omnipotent God unleashed by nominalism. Long before Nietzsche pronounced the death of God, the seed of nihilism was sown by the birth of the God of nominalism. It was not the weakness of the human will that led to nihilism, but its apotheosis. According to Gillespie, Nietzsche’s definition of nihilism is actually a reversal of the concept as it was originally understood, and…his solution to nihilism is in fact only a deeper entanglement in the problem of nihilism. Contrary to Nietzsche’s account, nihilism is not the result of the death of God but the consequence of the birth or rebirth of a different kind of God, an omnipotent God of will who calls into question all of reason and nature and thus overturns all eternal standards of truth and justice, and good and evil. This idea of God came to predominance in the fourteenth century and shattered the medieval synthesis of philosophy and theology…. This new way was in turn the foundation for modernity as the realm of human self-assertion. Nihilism thus has its roots in the very foundations of modernity.
Not only is Nietzsche’s diagnosis of the cause of nihilism—the death of God—wrongheaded, but his cure fails because he is unconscious of the prejudices guiding his valorization of the will to power. Nietzsche’s spirituality of the Dionysian overgod-man, try as it might to escape the gravity of Christianity, remains squarely within the ambit of one of its mutations in the transition from the medieval to the modern period. “The Dionysian will to power,” Gillespie writes, “is in fact a further development of the absolute will that first appeared in the nominalist notion of God and became a world-historical force with Fichte’s notion of the absolute I….Nietzsche’s Dionysus…is thus not an alternative to the Christian God but his final and in a sense greatest modern mask” (NBN xxi). Gillespie’s account is, by his own admission, not entirely original in that it is a modification of Heidegger’s view that Nietzsche was merely the crest of the wave of the will that motored modern philosophy from Descartes onward, but his novel claim is that that power was unleashed by the rupture of the medieval cosmos at the hands of the nominalists.
Here, I want to look more closely at a few of the planks in Gillespie’s account in order to highlight the centrality of two themes we have seen again and again throughout this essay: the collapse of the premodern cosmos and the increased focus on subjectivity and the will.

Gillespie contrasts nominalism with the thoroughgoing realism of medieval scholasticism. Though the latter certainly embraced divine omnipotence, this was usually seen as somehow limited by the perfect order of creation, which reflected the perfect order of the divine mind. The divine will and the divine intellect are seen as integrated. The notion of a completely arbitrary and all-powerful divine will would be seen not as a true representation of God’s freedom but as a reflection of fallen, human freedom. Moreover, for realism the divine will is not entirely inscrutable, since it produces an order that can be understood by observing nature, an intelligible cosmos reflecting it. As Gillespie recounts,

The metaphysics of traditional scholasticism is ontologically realist in positing the extramental existence of universals such as species and genera as forms of divine reason known either by divine illumination…or through an investigation of nature, God’s rational creation. Within such an ontology, nature and logic reflect one another…. On this basis, it is possible to grasp the fundamental truth about human beings and their earthly duties and obligations (NBN 12).

The “loose end” of this realism that the nominalists would exploit, however, is divine omnipotence. “While no one denied God’s potentia absoluta (absolute power),” Gillespie writes, “scholastics generally thought that he had bound himself to a potentia ordinata (ordered power) through his own decision. The possibility that God was not bound in this way but was perfectly free and omnipotent was a terrifying possibility that nearly all medieval thinkers were unwilling to accept” (NBN 14). It is the widespread acceptance of this possibility, Gillespie contends, that formed the foundations of modernity and spurred the rise of nihilism. The compound influence of Ockham and others was to normalize what had been a minority view in the medieval period: negative theology, the general notion that the ontological difference between God and humans (and God and nature) is so great that we cannot achieve any positive or analogical knowledge of his nature. The decoupling of human reason and God and the prioritization of divine omnipotence laid the groundwork not only for a new theology focused on revelation and faith alone (instead of natural theology and the complementarity of faith and reason), but a new understanding of nature. As Gillespie notes, “The effect of the notion of divine omnipotence on cosmology was…revolutionary. With the rejection of realism and the assertion of radical individuality, beings could no longer be conceived as members of species or genera with a certain nature or potentiality…. The rejection of formal causes was also the rejection of final causes” (NBN 21). Denied access to God, reason would now be focused squarely on knowing nature in a more precise, certain, and complete way, and in the process, as we saw Rosen describe above, reason itself would undergo a decisive change.
Since reason can no longer discover teloi in nature—including the human telos—it loses its normative status; its sole remaining task is instrumental, and the ends to which it is put are prescribed not by reason itself but by the will. Gillespie notes that this is the root of Descartes’ project of doubt: “The will as doubt seeks its own negation in science in order to reconstitute itself in a higher and more powerful form for the conquest of the world. Science and understanding in other words become mere tools of the will” (NBN 43). Doubt is undertaken as a security measure needed to protect against a dangerous and unpredictable nature created and unregulated by a capricious God. God and nature can no longer be looked to for practical guidance. Humanity must seek its proper ends within itself.
But since reason can no longer recognize humanity as an instance of a natural kind that fits within an ordered cosmos (ordered in the sense of being both intelligible and purposive), reason cannot do the job, and all that is left is the will. In Gillespie’s view, all of this signals a drastic shift from a model of God as “craftsman” to a vision of God as “artist”:

The nominalist emphasis upon divine omnipotence overturned [the] conception of natural causality and established divine will and efficient causality as preeminent. God was thus no longer seen as the craftsman who models the world on a rational plan, but as an omnipotent poet whose mystically creative freedom foams forth an endless variety of absolutely individual beings…. This ‘cosmos’ is devoid of form and purpose, and the material objects that seem to exist are in fact mere illusions (NBN 53).

As I mentioned near the start, the first philosophical usage of the term nihilism occurred when F.H. Jacobi alleged that Fichte’s absolute idealism was nihilistic. As Gillespie writes,

In [Fichte’s] interpretation of Kant…it became his goal to break the enslaving chains of the thing-in-itself and develop a system in which freedom was absolute…. Such a system in Fichte’s view could be established only by a metaphysical demonstration of the exclusive causality of freedom, and this in turn could be achieved only by a deduction of the world as a whole from freedom (NBN 76).

Freedom must be conceived not as a mere postulate that must be assumed because of a nature thoroughly determined by efficient causality (i.e., nature according to Kant via Newton), but as the principle of this nature in the first place. Fichte exacerbated the fault line between freedom and necessity broached by nominalism and wedged wider by Descartes: “Nihilism…grows out of the infinite will that Fichte discovers in the thought of Descartes and Kant. Fichte, however, radicalizes this notion of will…transforming the notion of the I into a world creating will” (NBN 66). This world-creating will is not, however, the will of the individual ego, but the source of all manifestation that alienates itself in nature: “Reality is merely a by-product of this creative will that seeks only itself…. The I of the I am is not a thing or a category but the primordial activity which brings forth all things and categories” (NBN 79). Nature is not an independent order: it is a spontaneous, free creation of the will, a negation of the absolute I. For Fichte, the moral struggle of humanity is the story of the I becoming reconciled to itself. Nature is nothing but the obstacle in the finite self’s path toward recollecting its original infinitude; or, put differently, nature is nothing other than an instrument for the perfection of humanity.

In presenting these accounts, I have highlighted their tendency to see the origins and nature of nihilism as tightly bound up with the concept of nature. This was done to bring to light the gamut of influences informing Nietzsche’s and Heidegger’s engagements with the problem of nihilism. The sources are several: Greek metaphysics, Christian theology, late medieval nominalism, modern science, politics and culture, the advent of the philosophy of history, and German Idealism. The diagnoses are different: some see nihilism as a historically contingent phenomenon; some think it is rooted in human nature; and some think it issues from the nature of being itself. What they all have in common, though, is the notion that nihilism has something to do with a disruption in the relationship between humanity and nature, and many of them hold that overcoming or at least attenuating it involves developing a new conception of nature. There must be an alternative, in other words, to the positivism and scientific naturalism that rule the day because such a universe has no place for meaning and value; it offers no ground or justification for human values, and mocks human intuitions about the value of nature. Moreover, a common thread in the accounts is that nihilism involves the emergence of the view that the human will is the source of all meaning and value, and that the latter are in no way discovered but are purely created.

In closing, my hope is that this narrative of the origins, development, and nature of nihilism might serve as a conceptual and historical backdrop for the contemporary project in environmental philosophy to “re-enchant the world” by recovering the meaning, value, and purpose that modern conceptions of nature by and large drained from the world. The search for a new cosmology or an alternative, non-reductive nihilism springs from a recognition of the nihilistic consequences of scientific naturalism.
Fordham University
Collins Hall, Philosophy Dept.
441 East Fordham Road
Bronx, NY 10458

Non-Dualism & Christianity

Indian Wisdom, Modern Psychology and Christianity.
Part III. Chapter 4
The Non-Dualism Hidden at the Core of Christianity.

In this period at the turn of the millennium, fundamental metaphysical conceptions such as non-duality are no longer limited to their areas of origin. If this notion takes root rather easily among us, it may be because it was awaited from within by the Western tradition and by Christianity.
Non-dualism corresponds to the spiritual paths which do not distinguish God’s substance, or the Absolute, from the created, and which affirm that they are one. Dualist systems place a personal God at the top; non-dualist systems, a non-personal Absolute. In this sense, dualism is usually associated with the path of devotion, and non-dualism with the path of knowledge. Vedanta, ancient Buddhism, and Zen are non-dualist. These non-dualist schools have influenced modern psychology; there are relationships, for instance, between Zen and Gestalt therapy. In a wider sense, one can discern a non-dualist background in many newly emerging movements, from spiritual ecology to the notion of the unified field in physics, via Heidegger’s philosophy of Being. An in-depth review of the potentialities of non-dualism in the West, and of its real relationships with Christianity, therefore seems appropriate in this book. We will first sketch a short history of non-duality in the West, then focus on the real relationships between non-dualism and Christianity, before considering a few possibilities for the future.

Elements of the History of Non-Dualism in the West
Much has happened since Vivekananda came to the West at the end of the nineteenth century to speak of non-duality as a possible basis for a universal spirituality. Hatha Yoga has become a common practice in Western nations, even in the countryside. Some movements inspired by non-duality, such as Transcendental Meditation, have grown to the dimensions of a new religion. In Tanzania, they have been given 25% of the country to develop it along both spiritual and economic lines. In France, according to the latest estimates, there are about two million Buddhists, which seems to be more than the number of practising Catholics, about 2% of the population, i.e., 1,200,000. Jacques Brosse evoked the relationship between Zen and the West in one of his recent works (1). My friend Jean-Marc Mantel organized a conference on meditation in Jerusalem where delegates of the three religions of the Book spoke, and where the possibility of a non-dual realization of the Absolute, and of transcending religious barriers through the simplicity of an elevated inner experience, was emphasized. The coming of a non-dual and impersonal spiritual path to this sacred city is a new trend.
In India, Swami Abhishiktananda’s ideas are making their way. In Poona I visited a Christian ashram whose superior, Sara Grant, has been able to write a booklet, ‘Towards an Alternative Theology: Confessions of a Non-dualist Christian’ (2). Her full-fledged studies of theology at Oxford do not prevent her from now defining herself as a non-dualist Christian. A disciple of Swami Abhishiktananda, Vandana Mataji, who received religious training in the Order of the Sacred Heart, was able to say at the Parliament of Religions in Calcutta in September 1993, to put it in a nutshell: ‘For me, it is hardly essential to know whether I am a Hindu Christian or a Christian Hindu.’ For this, she received an ovation from an audience of about five thousand people. Father Bede Griffiths has also reflected and published on how to reconcile non-duality and Christianity.
This article is not written for those who are in the kindergarten of spirituality; it is drafted for those who know how to ponder and who, in the course of their evolution, have been able to create an inner distance from the emotional reflexes linked to devotional or institutional conditioning. These three fields, the emotional, the devotional, and the institutional, are usually knotted together, and this very knot is an obstacle to a serene meditation on deep subjects. This should be kept in mind.
As for me, I have been following a Vedantic path for the last nine years, which I have mainly spent in India. My basic training is Christian, and I think I have studied the mysticism of this path more than many active Christians have. What I will give in this article are my impressions, my intuitions. I do not think that one can write in this field with the precision of a mathematician. Those professional theologians of the past who appeared able to do so seem rather dangerous to me, because they freeze the vitality of inner experience, and their work can easily be exploited by a centralized power as a penal code to determine which ideas are lawful and which are not. Having said this, it is not vital for me to tell what I think, since the end of the Yoga I practice is not to think, but to attenuate this talkative ego, which is only a small spot on the sun of the Self…
Let us come now to the history of hidden non-duality in the West. Since Christianity imposed itself, and thereby imposed devotion as the only way of salvation, non-duality and the path of knowledge have survived discreetly thanks to the teaching of Platonic and neo-Platonic philosophy. I say survive, because Christian apologetics has made constant efforts to make people believe that the search for Unity was merely an intellectual process, reserving for Christianity the prerogative of genuine spiritual experience. Actually, the path of knowledge is a complete path in itself, able to transcend the intellect as well as devotion can, and to reach an intensity of being analogous to union with Jesus. The experience of the Vedantic sages of India still shows this to us today. On the other hand, intellectual deviation is possible in the devotional path as well: correctly expounding the theology of grace does not automatically fill one with the love which should gush forth from this grace.
However, a truly mystical non-dualist teaching was able to be integrated into Christianity thanks to a trick: the translator into Latin of a text strongly influenced by the neo-Platonist Proclus had the good idea of attributing it to the first disciple of Saint Paul in Athens, Denys the Areopagite (3); consequently, it has been read and meditated upon by most medieval mystics, including Saint Thomas Aquinas. The rejection of false conceptions one can have of God is an essential element of the path of knowledge, and it has deeply influenced Eastern Christian mysticism. (4) Another author with a strong non-dualistic tendency, Evagrius Ponticus, was ‘smuggled’ into the Christian tradition when a couple of his texts circulated under the name of Saint Nil of Sinai. The change of attribution was identified by Father Irénée Hausherr. Evagrius was reproached for his link with Origen’s thought, and equally for having been able to write a whole book on the inner life in which he did not speak of Jesus.
As for Meister Eckhart, specialists usually make great efforts to bring him back toward the official doctrine under the pretence of putting his ideas back in their context. The contrary, though, seems obvious to me: if, in the heavily dualistic atmosphere of his epoch, Eckhart dared to claim non-dual experiences, it means that these were fundamental for him. So one should interpret his writings in a more, not less, non-dual sense than he was able to write. The same holds for those other Christian mystics who let non-dual experiences appear in their texts. If only a few mystics in the West have followed the path of knowledge, it does not mean that the others did not need it, but that they were discouraged by the heaviness, the monolithism, of an Old Testament-like monotheism.
There have been in the West atypical experiences of the Self which have arisen spontaneously among poets and philosophers. Louis Gardet devoted a good hundred pages to this subject (5). Heidegger also acknowledged that his view was shared with Zen non-dualism when he discovered the latter: ‘If I understand Zen well, that is what I tried to say in all my writings.’ (6) He also clearly describes the basis of the path of discrimination when he writes: ‘One should separate the authenticity of Being from the factitious character of existence.’ (7) Nevertheless, one needs much more than metaphysical intuitions to practically establish and transmit a complete spiritual path. Another example of non-dual intuition can be discerned in Camus’ book ‘The Plague’: ‘Can one be a saint without God? That is the only concrete problem I know of today.’ (8)
In India, the unity of substance between man and the Absolute is so natural that the same word is used to designate both: ‘atman’ means both ‘self’ and ‘the Self’ (there are no capitals in Sanskrit). The leading thread of this path of knowledge is the ancient questioning of the Upanishads: ‘What is that knowledge by which everything can be known?’ To put it differently, this amounts to affirming that there exists an experience through which Homo sapiens can become ‘fully sapiens’, a crowning of consciousness at the level of individual experience as it becomes universal. In India, it is widely acknowledged that devotion which reaches its peak (parabhakti) is one and the same as knowledge (jnana); this idea could inspire Christian non-dualists.
The very title of my article echoes Raimundo Panikkar’s book ‘The Unknown Christ of Hinduism’ (9). He took up Justin Martyr’s idea of Christ disseminated at the core of pagan religions, and tried to show the presence of an ‘unconscious Christ’ in Hinduism. Nevertheless, I have the impression that my task is easier than his. For non-dualism refers to an individual experience which is nevertheless unconditioned, since it comes after the rejection of conditionings, while the word ‘Christ’ is bound to refer to Jesus, a person who lived in a context very different from that of India.
In my eyes, the role of comparative mysticism is to make one feel the weight of one’s own cultural a priori, which is almost impossible to see directly: one needs a mirror to observe one’s own face. For that, one requires a fundamental sympathy, that of a genuine searcher for truth: ‘Non intratur in veritatem, nisi per charitatem’: ‘One does not enter truth except through charity’ (Saint Augustine, 10).
I find that the richness of our time lies in the plurality of religious groups; for instance, I feel that the birth of an advaitic ashram in Rome is a sign of the times. Rafael, who started this center, teaches not only Shankaracharya but Plotinus as well, in his direct mystical significance, and thereby revives a tradition of religious pluralism which had been eclipsed for fifteen centuries in this capital. The more groups there are, the more chances people will have of finding the path which actually suits them; and their interaction will be stimulating: from the rubbing of two stones springs the spark of consciousness. It is true that a religious monopoly can bring an apparent peace, but it may be the peace of the graveyard.
We will now consider the relationships of non-dualism with Christianity. We will first review several divergences which are sometimes presented as essential, but which will appear superficial to us after some reflection. Afterwards, we will look at a few deeper differences. There is a famous Zen koan which asks: ‘What is the significance of the coming of the Patriarch (Bodhidharma) from the West?’ Perhaps one will find in this article elements of an answer to a new possible koan at this turn of the millennium: ‘What is the significance of the coming of non-dualism from the East?’

Non-Dualism and Christianity: Twelve Points of ‘Parallel Divergences’
By this paradoxical designation of ‘parallel divergences’, I gather here twelve points where the differences between non-dualism and Christianity seem due more to misunderstandings than to irreconcilable oppositions. This will also give an opportunity to discard a couple of specious arguments advanced by a few Christian theologians ignorant of most of the philosophy and practice of non-duality. One should realize that in India most people follow a devotional, therefore dualistic, religion, but the already ancient interaction with non-dualistic conceptions enables spiritual people and sages to pass readily from one to the other; this contributes to the fecundity and vitality of Hindu thought and religious practice. Let us come to the various points of objection.
1) ‘Non-duality is a vague doctrine.’ This idea is often heard: for instance, that it would be the preferred basis for drug addicts to interpret their experiences, etc. There are several answers to this. First, the ‘vagueness’ is often only in the minds of Christian theologians who have acquired a superficial knowledge of a few non-dualist ideas, most often through the writings of other Christians, and who have no experience of the meditative practice corresponding to this path of knowledge. Fortunately, there are exceptions, which are becoming more frequent nowadays, though not without tension with the rest of the Christian community. Non-dualists like Shankaracharya with Advaita Vedanta, or Nagarjuna with Madhyamika Buddhism, have established philosophical systems whose coherence does not fall short of that of a Saint Thomas Aquinas. It is true that most mystics do not like to imprison their experiences in rigid and overdetailed systems. Jesus and Buddha did not elaborate complex philosophies supposed to answer every question in detail. The Desert Fathers do not have a very explicit theology, but the radiance of their advice inspires us to this day.
The notion of enstasis, ‘staying inside’ (a word used by Mircea Eliade when speaking of Yoga), is no hazier than that of ecstasis. On the contrary, one can note that the notion of ecstasis presupposes union with a God whose existence has always been difficult to prove, while enstasis only requires a return inside, and everyone can have a direct perception of what ‘inside’ can mean. The term ‘enstasis’, indeed, does not seem so well fitting: the non-dualist meditator seeks an experience of the whole which abolishes the differences between inside and outside, between enstasis and ecstasis; one could therefore instead call this alternative state of consciousness ‘holostasis’.
2) ‘The non-dual experience of nirvana is a state of torpor which results in no real change in the individual’.
There are two important distinctions to make: first, between the experience of ‘snoozing’ during a spiritual practice, which Ma Anandamayi calls ‘shunya’, and the true experience of emptiness, ‘mahashunya’. The other distinction is between a temporary dissolution of the mind and the ego (manolaya) and their definitive destruction (manonasha). The first is more or less effective according to its depth, but the second corresponds to the great experience, which is definitive in the benefit it provides. One cannot even speak of a transformation of the ego, since before there was an ego, and after there is no more. In the path of devotion as well, not every love experience is transforming: it depends upon its authenticity and its depth.
3) ‘Non-duality teaches a truth for a select few, while dualism is a democratic teaching for the masses.’
Certainly, in Vedanta there is a distinction between empirical truth (vyavaharika) and absolute truth (paramartha). Nagarjuna also speaks of the two truths (satyadvaya), and in Japanese Buddhism one speaks of the ‘provisional law’ versus the ‘definitive law’. I think this is a concrete attitude which respects the differences of level between people and which allows various spiritual paths to be integrated simply by ranking them, rather than by choosing one and destroying the others. There is no question here of sociological discrimination: everyone is allowed to experience the absolute truth, but one must make an effort that few are willing to make. There is no possibility of non-dualism without this practical distinction between the two truths. Christ himself respected this distinction: if not, why did he not organize the Last Supper in the Temple courtyard, or appear to the crowds after his Resurrection?
The concept of a rational unity between all the levels of inner development is an idol, and one should stop sacrificing to it. It is an attempt at uniformization which hinders both beginners, who might like, for instance, to use violent trance to communicate with God, and advanced mystics, who place the emphasis on knowledge and on the spontaneous cessation of the sense of ‘I am the doer’. The real problem is that the hierarchy is afraid of not understanding the ‘definitive law’ well, and of not being able to check those who follow it naturally. Buddha tells the following story:
“Two brothers go to the mountain to cut wood and come back heavily loaded. All of a sudden, the younger sees a big heap of copper coins and drops his wood to take as many coins as he can. The elder thinks: ‘I have worked so hard for this wood that I will not lose it. I will come back afterwards to take the coins.’ Further down the path, the younger brother sees silver coins, which he takes instead of the copper ones, while the elder remains attached to the wood he had gathered by the sweat of his brow. Later, the same occurs again with golden coins. When the elder comes back afterwards to take the coins, they have disappeared.” (11)
4) ‘Non-dualism is a doctrine which is cold and devoid of love because it does not acknowledge the supreme value of the human person.’
This is an essential question, and we will develop it in more detail. It may have been in the thirties that personalism asserted itself most: in Judaism, there was Martin Buber’s book ‘I and Thou’, and in Catholicism, the foundation of the journal ‘Esprit’ (‘Spirit’ or ‘Mind’), with Mounier and Berdiaev in particular. The historical background of this epoch was rather ominous: democracies had paled before the ascent of totalitarianisms, and it was indeed urgent to convey to the crowds, tempted by the over-simplifications of mass movements, that the human person was inalienable. Listening to or reading certain Western authors, one gets the impression that the average Indian should be half schizophrenic on the pretext that he does not have the notion of the external person. For those who have lived in India, this idea of course appears fanciful. The ordinary Hindu has a personality and an ego like everyone else. He may care more than Westerners do about being in harmony with his family and his clan (gotra), and this is, in their eyes, a sign of psychological maturity. One who wants to be independent, meaning to remain alone, far from the family, is seen as a kind of asocial element, a failure; but for the spiritual life there exists the possibility of renunciation, in which one cuts the links with the family. In this sense, it is a strong process of individuation, but one which does not stop there, because it continues with a new widening, that of an opening to the Universal Consciousness, whatever name is given to it.
Upon close scrutiny, the notion of the Christian person, difficult to distinguish clearly from the individual, is rather hazy. It is beyond the usual ego; it is ‘pure presence’; it is ‘strictly ineffable like the divine person’ (12): one may wonder what is left of the person, except the result of a kind of pure act of faith claiming that the person must continue to exist. Lossky says: ‘by renouncing his own contents, by giving them freely, by ceasing to exist for himself, the person manifests fully in the unified nature of all. By renouncing his particular good, he dilates infinitely and is enriched by everything in everybody.’ (13) That exactly describes the dissolution of the ego in the Vedantic experience, and this process has hardly any reason to save the person as such.
Obviously, belief in doomsday obliges one to keep a sort of shadow of individuality who can answer ‘present’ at the last call. Likewise, in Hinduism, there is something of the person, or of the ego, which passes from one life to another to convey individual karma; but Hinduism also acknowledges that, beyond that, full Liberation is possible: then karma and person dissolve into the Self. The candle flame disappears into the sun; the process follows its logic to the end. Personalist theologians are so attached to their idea that they feel obliged to correct the Fathers themselves: it is amusing to note that one of them, quoting Gregory of Nyssa, ‘Concepts create idols of God; the enraptured one only feels something,’ feels the need to correct this to ‘rather, someone’. (14) Gregory of Nyssa, as a full-fledged mystic, had the intuition of the ultimately impersonal character of the Absolute. That is why he said ‘something’, like the ‘tat’ (That) of the Upanishads when they evoke the Supreme. This annoyed those theologians who have less elevated experiences and who run after the person as one might run after one’s own shadow, hoping one day to grasp it.
If Christ became nothingness, emptied himself (‘kenosis’ in Phil II-7), why could other human persons not do the same? Should it not be the least one could do? This represents a logical process. Is it not written: ‘If the grain of wheat which falls into the earth does not die, it will not bear fruit’? Can we say, in truth, what remains of the grain after it dies? Science itself, following its recent discoveries in neuropsychology, questions the notion of the person and arrives at a rather Buddhist concept of ‘aggregates’ whose impermanent association gives the impression of personality. (15) This should not lead to nihilism or weakening. Plotinus says: ‘The man who denies his own individuality does not lessen himself but on the contrary grows to the dimension of universal reality.’ Recently, another exponent of the path of knowledge, Nisargadatta Maharaj, said: ‘The highest charity is to give the consciousness of “I am”.’ An idea which underlies the insistence on the notion of the person is: ‘There is happiness only in relation.’ Hindu dualists use the simile of a lump of sugar: one must be different from the lump of sugar to be able to enjoy its taste; but it is less respectful of the Absolute to consider it a lump of sugar and to want to make it an object of tasting. This leads us to the question of anthropomorphism in dualism. The descriptions of union with God as a marriage, or as intercourse between lover and beloved, are no longer satisfying after a certain level of evolution. They sound too much like the projection of an unpurified desire, and psychiatrists rightly note that one’s delirium follows one’s desire. The dualists’ idea that we will experience more and more of God’s love, and that indefinitely, sounds very anthropomorphic to me. This is the wish of lovers, but reality seems rather different.
When we read more and more books, we know more and more things; so, by analogy, when we perform more and more spiritual practices, we should get more and more results. But can we reduce the spiritual path to a geometrical progression? Might we not rather be attracted by the Eastern notion of a sudden change of level in consciousness? Is it not necessary beyond a certain level, just as the passage from Newtonian physics to Relativity was needed to integrate new experiments? Strangely enough, by going beyond the anthropomorphic notion of the person, one comes back closer to man and to his direct experience. Nisargadatta Maharaj says: ‘There is no other God than this sense of presence.’ (16) Meister Eckhart boldly affirms in a famous sentence: ‘If I were not, God would not be either.’ (17)
What really matters is not the projection of desires, like a laser into higher mists, but the letting go of those desires, their complete giving up, so that ‘That’ can reveal itself. This is not easy: that is why so many spiritual disciplines have evolved. Personalists criticize non-dualism by saying: ‘Non-dualism employs techniques; dualism rests on love alone.’ The first reflection which comes to my mind is that love too is a technique, or an ‘art’ if the term ‘technique’ frightens. He who follows the path of devotion must gradually learn to play with his emotions, and not to be played by them, in order to direct them fully towards the Divine. Vedanta is not against Yoga as a practice of purifying the mind, but it repeatedly underlines that ‘That’ reveals itself freely. The Realization of the Self is beyond the meditations which aim to unite us to a given divine form (upasana).
Dualism has a tendency to harden, to thicken the ego by granting it a substance of its own, different from the divine substance. One can wonder, from the viewpoint of depth psychology, whether a relationship does not exist between dualism, the violently affirmed transcendence of Judaism and Islam (see ‘Qul Allahu Akbar…’, Say: ‘God is the Highest’, in the daily Muslim prayer), and the trauma of circumcision. This aggressive occurrence, even if it comes at different ages in the two religions, may represent a cut in the world of primeval unity. Is this cut not so effective precisely because it acts on the sexual force, because it creates an awakening of that inner energy which India calls Kundalini? This could be a theme of reflection for those interested.
One cannot speak precisely of going beyond the person if one does not clearly distinguish two exits from the ego, upwards and downwards. The latter can correspond to schizophrenia, which is a state below the ego, or, in a more attenuated way, to a kind of modern thought which is reductionist, even nihilistic, and which Jean Wahl aptly called ‘trans-descendence’. That is why I would prefer to speak of ‘transpersonalization’ rather than ‘depersonalization’, which has a pejorative, even pathological undertone. Likewise, regarding the path of knowledge, it seems better to me to speak of a path which liberates the person, a ‘transpersonal’ path, rather than of an ‘impersonal’ path, a word which most people associate with coldness and rejection. Speaking of this, one should note that the transmission of so-called ‘impersonal’ ways like Vedanta or Zen takes place in a very personal way, between master and disciple, through a vital relationship which extends over years. (16) In this sense, this relationship is less impersonal than the institutional transmission which is most common in Christianity. A true practitioner of the path of knowledge has an intense devotion to Being. He practices in this direction. Nisargadatta Maharaj speaks of the ‘explosive illumination of the “I am”’. (20)
Jacques Maritain, who was a strict Thomist, tries to set up an opposition between the path of devotion and that of knowledge: the first is supposed to correspond to the mysticism of fire, and the second to that of the mirror. The simile is obviously pregnant with apologetic undertones, insinuating that the path of knowledge is frozen, and that it shows nothing but one’s own face, that is, the most external aspect of the ego. First, one should note that fire fulfills its vocation only when the fuel has been completely consumed, which means when duality is consumed and only unity remains. Besides, when it is intense, the mysticism of knowledge is more than a fire; it is a laser which separates the Real from the unreal. On the other hand, those who need to rest on a God with a human face, one who therefore looks astonishingly like them, may be closer to the mysticism of the mirror than those who dive directly into the formless Absolute. Is the other who has the same aspect as us really ‘other’?
What sort of images does Saint Augustine’s famous ‘Deus intimior meo’, ‘God more intimate to myself than myself’, evoke? One can glimpse a kind of ultimate center beneath the various layers of personality and conditioning, or else a ground of the human being which widens more and more as one opens oneself into Being. Both representations, the grain of consciousness and the limitless space, are regularly used to evoke the Absolute in Vedanta. (21)
A last objection to impersonal non-dualism is that one who does not believe in the person cannot respect him. It seems to me that, on the contrary, he respects him all the more, since he is no longer prevented by the filter of his own person from appreciating the other objectively, as he is. It is only when two persons, which means practically two egos, are in relation that there are manipulations and conflicts.
The notion of the person, in its concrete form of personality, is of course useful in the field of education, so that adolescents and children can assert themselves as such. It is equally useful from the sociological viewpoint, so that the individual is not crushed by the mass or by the state machinery. More deeply, this notion is closely related to physical love, the smallest physical detail being invested by the lovers with strong emotions. For a mature mystic, however, this notion of the person, which at the beginning was a help, becomes an obstacle.
In conclusion, the idea of a ‘personalist stage’ arises: the human person is the very type of the empirical, provisional (vyavaharika) truth which disappears when one comes close to the Absolute, like a moth in a candle flame. Inasmuch as one believes in it, one is submissive to it, one is dominated by it. When one starts questioning it, the sun of Realization begins to dawn. Non-dualism does not contradict any doctrine; it instead gives all doctrines their place and integrates them into its worldview. Those who still keep in mind, as an aftertaste, that ‘personal’ means life and ‘beyond the personal’ means death can meditate for a long time on this Zen koan: ‘The living one enters the coffin, and the dead one carries it.’
5) One often objects to non-dualism that it is coldly technical, that it has no notion of gratuitousness because it has no concept of grace.
Even in Yoga, which is the most technical part of Hindu spirituality, the notion of grace is present. Patanjali’s Yoga-sutras speak of ‘surrender to the Lord’ (ishwara-pranidhana, 1-23) as a possible way to reach the Absolute. Vedanta, being non-dualist, does not have the notion of God’s grace, but it strongly emphasizes the Guru’s grace and the notion of gratuitousness. Knowledge reveals itself on its own, without being the automatic reward of our spiritual endeavours. It is on this very point that Vedanta differentiated itself from Purva Mimansa, the school which immediately preceded it. In Mahayana Buddhism, the notion of ‘spontaneous realization’ is equally at the forefront.
Christian prayer accepts the notion of the automatic efficacy of prayer up to a certain point, and in this it resembles Yoga: ‘Knock, and it shall be opened.’ This efficacy not only depends upon God’s good will but also exists in spite of his reticence, as we can see in the parable of the bothered man who ends up yielding to the beggar to get some peace. The Orthodox, unlike Catholics, even acknowledge that grace has been wholly given, and that from the beginning of humanity. This grace is paradoxical, as is well said in the apocryphal words of Saint Ignatius of Loyola: ‘One should act as if everything depended upon man and nothing upon God, and one should trust God as if everything depended upon God and nothing upon man.’ (22) One can interpret the proposition of theology, ‘Man is deified through grace, and not through nature’, by understanding under the term ‘nature’ the ego, for this seems to the ordinary man to be his own nature: ‘I am myself, and nothing else.’ Then this affirmation is not different from what Vedanta says: the force which transforms the ego comes from beyond the ego, from the Self in non-dual language.
The dependency of Christians upon God’s grace easily gives a tragic dimension to their spiritual life, inasmuch as they can never fully grasp why and how it comes. Philaret of Moscow says, for instance: ‘Man is suspended between two abysses, as if on a diamond bridge which is God’s will; above him is the abyss of divine darkness towards which he is called, below is the abyss of non-existence, from which he has been extracted and into which he can only fall if he renounces his vocation, without ever being able, though, to come back to pure non-existence.’ (23) One who follows the path of dualism has not only a bet to make, that of God’s existence; he must also bet that he will go to paradise and not to hell. From the viewpoint of spiritual psychology, the sense of absurdity and existential anguish may be the other side, the ‘shadow’ in the Jungian sense of the term, of the belief in the grace of a God who is completely good.
6) The question of grace is linked to that of transcendence and immanence. Non-dualism is often reproached with falling into complete ‘immanentism’, into pantheism, by saying that the Absolute, the world and man are only one. It is true that non-dualism has no difficulty integrating pantheism as a stage of spiritual development; it does not feel obliged to reject it violently as monotheism does. However, the very movement of non-dualism is transcendent and apophatic: it is the ‘neti, neti’ (not this, not this) of the Upanishads. It is interesting to note, by the way, that one speaks of non-dualism and not of monism, in order to preserve the ineffable character of the Absolute, beyond the pairs of opposites, beyond even being and non-being. In Christianity, except perhaps among a few mystics, one is rather afraid to question, to go beyond, the very being of God. In this sense, non-dualism is more transcendent than dualism.
7) Another frequent objection against non-dualists is that they would be capable neither of action in the world nor of a scientific mind, since they consider that the body as well as the world are illusions.
Let us start by speaking of the body. Christians repeat that the Incarnation alone can give the body its ultimate dignity by returning it to its primeval vocation as temple of the Spirit. One should first say that, for non-dualists, the body is not only the temple of the Spirit but is Spirit itself, since there is only one ‘substance-Spirit’ which is the ground of everything. In non-dualism, one acknowledges that the mind is based on the body, and it is repeated that we are lucky to have obtained rebirth in a human body and should not waste it. We have seen that Vedanta accepts Yogic practices as a means of purifying the mind. Speaking of this, it is interesting to note that it is precisely in the non-dualistic atmosphere of India that the body techniques aiming at the spiritual, gathered under the general term Yoga, are most developed: it may be because a good number of Hindus felt the limitations of explaining spiritual progress exclusively through God’s grace.
Indeed, affirming the reality of the body is common sense: the ordinary man is convinced of the reality of the body, and I am sure that if one could speak of metaphysics with animals, they would equally support this reality. Doubt regarding this basic fact comes from the skill of human consciousness in progressing; the disequilibrium of doubt is the step of thought. This disengagement from body consciousness is not a question of bodily asceticism but of understanding. In this sense, Christian ascetics, with their macerations, seem to have paid less respect to the body than the Neoplatonists, who used to say that it was not so important vis-a-vis consciousness, but at least would not torture and martyr it. I have examined this question more closely in chapters 2 and 3 of this part.
Let us come now to the second point of the objection: ‘the unreality of the world according to non-dualism’. Certainly, there have been Buddhist schools, like Vijnanavada, which have maintained the complete unreality of phenomena. However, in Vedanta the manifested world, maya, is described as neither real nor unreal, in practice defying any description (anirvachaniya). Additionally, the world is unreal vis-a-vis the Absolute (paramartha), but it has relative reality (vyavaharika), as we saw above when we spoke of the two truths.
If we want an example of the capacity for action of Vedantins, we may mention the Ramakrishna Mission, inspired by Vivekananda’s ideas on practical Vedanta, which is said to be the biggest humanitarian organisation in the world. Although its work is mainly within India, one should remember that India today has almost a billion inhabitants. This fact shows well that Vedanta and action are not incompatible.
On the other hand, it seems that Western science has developed not because of the Church but in spite of it. The controversies on evolution which still continue today seem hardly to have troubled modern Hindu thought. The idea of an impersonal Self seems more readily assimilable to the notion of the unified field developed by contemporary physics than a personal, creator God, the ‘Deus ex machina’ of dualist doctrines. Scientists had to put aside this conception of God in order to evolve. Every really religious man considers that God is in the world and that He is more real than himself; if not, he is only a materialist who may follow a few rituals from time to time. Nil of Sinai says: ‘Blessed is the monk who sees every man as God after God’ (24), and also: ‘The monk is one who, while retiring from the midst of men, is united to them and sees himself in everybody.’ (25)
What is important is not to give up the world, but to give up one’s ordinary conception of the world, to see God in it, to deify it as the Orthodox say, or as is written at the beginning of the Isha Upanishad: ‘By the Lord (Isha) enveloped must this all be, whatever moving thing there is in the moving world. With this renounced, thou mayest enjoy…’ (Hume’s translation). Abbot Alonios had this non-dualistic intuition of the disappearance of our ordinary conception of the world through the stopping of the mind: “If man does not say in his heart, ‘there is in the world only myself and God’, he will not obtain rest.” (26) Isaac the Syrian, through his direct experience, had equally reached a conception close to the ‘creation by seeing’ (drishti shrishti) of Hindu thought: ‘The world dies where the current of passions stops.’ (27)
Saint Thomas acknowledges that the world may not exist when he writes: ‘It may be that everything which is not God does not exist.’ (28) Gregory of Nyssa sensed the paradox of a world which is both real and unreal when he said: ‘The paradox of the world is to have its existence in non-existence.’ (29) Meister Eckhart does not hesitate to affirm the ultimate unreality of creation as an obvious fact: ‘All creatures are pure nothingness. I do not say that they are minute or that they are something: they are pure nothingness. What has no being is nothingness. All the creatures have no being because their being depends upon God’s presence.’ The only difference between classical Christian theology and Vedanta is that the first says that man has lost his state of deification and must find it again, while the second says that man only believes he has lost it.
One can criticize the Vedantic notion of the world as ‘anirvachaniya’, beyond any description. Is this not an easy solution, an escape which leaves the problem unsolved? This is true. But every metaphysics and theology leaves certain problems unresolved. Christians themselves acknowledge that creation ex nihilo (creation from nothing) is inexplicable. How has God been able to descend from the timeless to the temporal to accomplish the act of creation? If He is almighty, how can we explain man’s freedom? Out of love, they say. But if God is really complete, how is it that He needs love? And if man was perfect before the fall, is it not illogical that he chose evil? Globally, the problem of evil is more difficult to solve in a dualist system where the world has been created by a compassionate God. For non-dualists, evil comes not from sin but from ignorance, from maya, which has no beginning but has an end at the time of liberation. This conception of evil as ignorance is more in keeping with the spirit of modern psychology than that of evil as a fault.
The notion of a chosen people, which seems self-evident from inside Judaism and Christianity, will rather be a matter of scandal seen from outside. This God who has chosen a few percent of humanity to make them the chosen people and has relegated the others, if not to hell, at least to a lower level, seems to be more an inferior being than a God in the eyes of these ‘others’. In this sense, those metaphysics which refrain from referring to a chosen people and a personal God certainly represent progress towards the possibility of real universal tolerance.
In India, both totally pure non-dualism (advaita vedanta) and Vaishnavite dualism have been tried: Ramanuja created qualified non-dualism (vishishta advaita), trying to reconcile certain Vaishnavite religious beliefs with non-dualism. Most historians of philosophy concede that his attempt was unconvincing, not because he was a bad thinker, but because his very aim from the outset was to square the circle. (31) Just as Christian mystics were not especially fond of mitigated monastic rules, the mystics of contemporary India hardly refer to mitigated (qualified) non-dualism when they speak of Vedanta, but rather to Advaita.
A last objection often raised by theologians regarding non-dualism is that it does not have the benefit of a revealed Word, by the very fact that it does not acknowledge a personal God. Moreover, non-dualism would not allow for a progressive revelation of God in history.
First, one should say that in Hinduism the Vedas, including the great words (mahavakyas) of the Upanishads, are considered to have been revealed to the ancient sages (rishis). They did not hear them from a personal God, as the Prophets did, but ‘saw’ them directly: this is the very meaning of the word ‘rishi’, the seer.
Non-duality has no special difficulty in accepting the notion of evolution. This evolution takes place within empirical truth and changes nothing of the Absolute. On the other hand, non-dualism does not believe that our world is heading towards a paradise on earth or that God reveals Himself more and more in it. There are indeed clear improvements in certain fields, but regressions also occur in others. People’s minds are more refined, but at the same time prone to suffer for more subtle motives. The very notion of a wholly compassionate God makes the scandal of evil still more striking. The belief that happiness on earth will follow a continuously rising curve, like the industrial production of a country which is developing well, sounds a bit too much like a ‘metaphysics of the industrial revolution’. The non-dualist does not need it to motivate his work on himself or at the service of others.

Heidegger and Zen

“Zen in Heidegger’s Way”
David Storey, Professor of Philosophy, Boston College

Abstract: I argue that historical and comparative analyses of Heidegger and Zen Buddhism are motivated by three simple ideas: 1) Zen is uncompromisingly non-metaphysical; 2) its discourse is poetic and non-rational; and 3) it aims to provoke a radical transformation in the individual, not to provide a theoretical proof or demonstration of theses about the mind and/or the world. To sketch this picture of Heidegger’s thought, I draw on the two texts from his later work that command the most attention from commentators seeking resonance with Zen, and discuss how his treatments of death, fallenness, facticity, and temporality in Being and Time square with Zen philosophy. Finally, I critique Heidegger’s ambivalence about the possibility of overcoming language barriers and his reticence to prescribe concrete practices aimed at triggering the profound shift in thinking he clearly believed Western culture to be so desperately in need of.
In the introduction to an edition of essays by D.T. Suzuki, the foremost ambassador of Zen Buddhism to the intellectual West, William Barrett mentions an anecdote that has generated a significant amount of scholarship about Heidegger’s connection to Buddhism. Barrett reports: “A German friend of Heidegger told me that one day when he visited Heidegger he found him reading one of Suzuki’s books; ‘If I understand this man correctly,’ Heidegger remarked, ‘this is what I have been trying to say in all my writings’”i (Barrett, 1956, xi). The truth of this story is unverifiable and irrelevant, but Barrett considers its moral undeniable:
For what is Heidegger’s final message but that Western philosophy is a great error, the result of the dichotomizing intellect that has cut man off from unity with Being itself and from his own being….Heidegger repeatedly tells us that this tradition of the West has come to the end of its cycle; and as he says this, one can only gather that he himself has already stepped beyond that tradition. Into the tradition of the Orient? I should say he has come pretty close to Zen (Barrett, 1956, xii).ii

In the spirit of this controversial claim, and in light of a host of similar and possibly apocryphal anecdotes, many scholars have undertaken historical and comparative analyses of Heidegger and Asian
philosophy (especially Taoism and Zen Buddhism) apparently on the gamble that where there is smoke, there is fire. The existence of this “fire” is predicated, I submit, on three simple ideas: 1) Taoism and Zen are uncompromisingly non-metaphysicaliii; 2) their discourses are highly poetic and decidedly non-rational; and 3) they aim to provoke a radical transformation in the individual that forever alters his comportment toward himself, others, and the world, not to provide a theoretical proof or demonstration of theses about the mind and/or the world. In this essay I will focus specifically on what role, if any, the Zen tradition plays in Heidegger’s early and later thought, with occasional references to Taoist themes.
The exploration of the nature of the Heidegger-Buddhism connection has, roughly, taken at least one of two paths: influenceiv or resonance. While the hunt for an esoteric reading of any thinker is at best dangerous and at worst foolish, we are obligated to approach Heidegger armed with his own hermeneutical principle of retrieve, which William Richardson describes thus: “to retrieve, which is to say what an author did not say, could not say, but somehow made manifest” (Richardson, 2003, 159).v Dismissing the question of influence as moot and judging the evidence to be either indirect, inconclusive, or non-existent, commentators such as Graham Parkes have instead argued for a “pre-established harmony” between Heidegger’s thought as a whole and core tenets of Taoist and Buddhist philosophy. This claim presupposes the accuracy of William Richardson’s thesis that Heidegger’s works constitute a coherent, unified whole. Fashioning Being and Time as the last hurrah of metaphysics, the project whose residual metaphysics Heidegger came to recant, the argument for pre-established harmony sees in the existential analytic the fledgling formulations of a notion of selfhood and world that is quite alien to the Western tradition and rather congenial to Eastern thinking, a notion perhaps best described as nonduality. This residual metaphysics recurs throughout Heidegger’s works along the lines of the ontological difference between Being and beings, and constitutes an ambivalence over which scholars are still squabbling. This ambivalence, I hope to show, is evident in Heidegger’s reticence to prescribe any concrete practices for triggering the radical shift in thinking he labored to galvanize.
Heidegger appears to warn us that blithely attempting to step outside of and transcend one’s tradition, situation, and heritage, a prospect so tempting and even advantageous in today’s world, might very well land us in even greater inauthentic peril than we were beforehand. However, by circumscribing the limits of his tradition and designating which practices are off limits and which are not, Heidegger, I argue, ultimately reifies “the West.”
In other words, neither the branches of the Western Enlightenment (Rationalism from Descartes to Hegel and Romanticism from Rousseau to Nietzsche) nor the roots of Greek philosophy provided Heidegger with what he was looking for, and I suggest that Asian philosophy in general and Zen in particular offer a corrective in the way of praxis to the very lopsidedness of theoria that Heidegger labored to amend. To sketch this picture of Heidegger’s thought, I briefly point out texts from both his early and later work that recommend comparison with key issues in Zen. First, I will draw on the two texts from Heidegger’s later work that command the most attention from commentators seeking Eastern resonance. Second, I discuss how Heidegger’s treatment of death, fallenness, facticity, and temporality in SZ squares with Zen philosophy. Finally, I submit a critique of Heidegger’s aforementioned ambivalence about the possibility of overcoming language barriers and reticence to prescribe concrete practices aimed at triggering the profound shift in thinking he clearly believed Western culture to be so desperately in need of.
1. Two Dialogues
A. The Nature of Thinking: “Conversation on a Country Path about Thinking”vii
It is easy to plumb Heidegger’s later works and cherry pick passages that could have been plucked straight from the Tao Te Ching. The subtle, poetic flavor of this primary work of Chinese Taoism easily lends itself to later Heidegger’s notion of “poetic dwelling.” Since both Taoism and Zen operate from a decidedly non-metaphysical comportment, and prefer poetic and paradoxical forms of expression that intentionally thwart logical analysis and discursive reasoning, it is easy to see why many scholars have been struck by their similarity to later Heidegger’s experiments with language. Indeed, Otto Pöggeler, one of Heidegger’s most able and respected German commentators, contends that the Tao Te Ching played a crucial role in the development of Heidegger’s later thought.viii
Be that as it may (or may not), the stylistic similarities between two thinkers or two philosophical systems can all too easily seduce us into passing over the real and irrevocable differences that force them apart. This is especially dangerous in Heidegger’s case, since his recurrent later attempts at reformulating the question of Being are aimed precisely at unseating the very notion of there being a master narrative, a complete system, a coherent body of doctrine. As David Loy observes: “It is not possible to discuss Heidegger’s system because, like Nagarjuna, he has none. For Heidegger thinking is not a means to gain knowledge but both the path and the destination” (Loy, 1988, 164).ix All is always already way, and that seems to be all that we are allowed to say about the matter—there can be no calculation or meaningful organization, sequence, or pattern to the various way-stations, moments, or thoughts that occur along the way. Reflecting on one of his own “moments”—Being and Time itself–Heidegger remarked: “I have forsaken an earlier position, not to exchange it for another, but because even the former position was only a pause on the way. What lasts in thinking is the way” (Dialogue, 1971, 12).x Compare D.T. Suzuki:
All Zen’s outward manifestations or demonstrations must never be regarded as final. They just indicate the way where to look for facts. Therefore these indicators are important, we cannot do well without them. But once caught in them, which are like entangling meshes, we are doomed; for Zen can never be comprehended (Barrett, 1956, 21).xi

The Zen analogue to Heidegger’s notion of “preoccupation with beings” (CP) or “entanglement” (SZ) is tanha, popularly translated somewhat misleadingly as “desire.” A more proper rendering would be “attachment” or “clinging” to phenomena. To seize upon the flux and freeze Being/Tao in its tracks, to attempt to master, fix, or cling to it with language or logic, is, Heidegger believes, the mistake and miscalculation of Western metaphysics. Being just sort of “does its own thing,” and we are inexorably caught up in its sway. Our best bet is to release ourselves to this Being-process, not in the sense of demurring or “giving way” to it, but of offering ourselves up to it as servants.
Two of later Heidegger’s works stand out due to their formal character: the CP and the DL. The dialogue is an ideal site for interrogating and pinning down the core of Heidegger’s later thought, and thus apprehending what kinship it may have with Taoist and Zen thought, because it is flexible enough to contain both rational and poetic discourse. That is, it suffers neither from the constraints of monologic—the metaphysics of subjectivity (inaugurated by Descartes and repeated by Sartre) laced within SZ that Heidegger eventually came to recant—nor from the vagary of poetic saying, yet provides a space in which both can have their say. Peter Kreeft usefully qualifies this as “a highly disciplined, exacting kind of poetry,” a kind of saying that, Heidegger thinks, is more rigorous than and indeed makes possible rational discourse itself (Kreeft, 1971, 521).xii In this section, I draw on these two dialogues in order to show the congruence of Heidegger’s later thought with some basic Zen tenets.
The CP is held between a scientist, a scholar, and a teacher. These three figures speak, respectively, for three basic comportments toward or from Being. The first is the Dasein who is blind to the phenomenon of the world. This is the objectifying stance criticized in SZ, the monological Scientist curious about and transfixed by phenomena, asleep to his own unheeded intentional comportments to the world. The Scientist disenchants the world by dissecting it with analytical reason and foisting his own conceptual straightjackets on things with a view to seizing their “essence,” and thus takes things, literally, only on his own terms. In Division II of SZ this comportment is described as “making-present.”
The second comportment is the Scholar, who represents Dasein as awakening to and reflecting on the existential-ontological structures that govern its engagement with the world and, by rendering itself transparent to those structures, seizing itself in its freedom unto death, toward its ownmost end and ultimate possibility. This is the “authentic” comportment championed in Division II, which enacts a non-conceptual way of thinking and assumes a place in and towards Being, yet draws up short at the transcendental horizon of temporality. The “way in which ecstatical temporality temporalizes,” what makes the projection of Dasein’s existence possible, indeed, whether and how “time manifests itself as the horizon of Being” is what calls for interpretation (Heidegger, 1962, 488).xiii Yet interpretation, by definition, cannot overstep that very horizon, because meaning and sense can only be made and registered on this “side” of the temporal “border.” The project to think toward being thus fails, and Dasein is cast back upon itself in its having-been, and this calls for a new approach. This is the state of the Scholar, who has pushed rational discourse to its limit, and is left wanting and waiting for some clue as to how to proceed on the way towards Being.
The third figure, that of the Teacher, embodies a disposition unrepresented yet certainly hinted at in SZ: Gelassenheit. Whereas the prior two positions were subjectivistic insofar as they thought toward Being, the Teacher endeavors to think from Being, to keep silent about and wait for the temporalizing of ecstatic temporality—here called the “regioning” of “that-which-regions”—but not in such a way as to be frustrated by the lack of an answer, to be stymied about failing to find the words or concepts with which to interpret or locate the meaning of Being. The Teacher’s discourse is thus properly characterized as trans-logical.
Gelassenheit is not “giving up”; still less is it “cracking the code” of Being. As the translators note, “[Gelassenheit] is thinking which allows content to emerge within awareness, thinking which is open to content…meditative thinking begins with an awareness of the field within which these objects are, an awareness of the horizon rather than of those objects of ordinary understanding” (Heidegger, 1966, 24).xiv More specifically, Heidegger is claiming that all thinking necessarily begins this way, and so a thinking that explicitly acknowledges this fact enjoys a more primordial relationship with Being, and therefore with thought itself. This necessity is neither logical nor causal, nor is it contained in the nature of a substance called “human being.” Indeed, Heidegger makes it clear at the start that “the question concerning man’s nature is not a question about man” (Heidegger, 1966, 58).xv To go against this grain and attempt to calculate, plan, plot, represent, or frame Being in any totalizing manner is thus at once a perversion of both Being and thinking. This is surely why, as Peter Kreeft points out, “Heidegger uses a word designating what Being does (“regions”) rather than what it is” (Kreeft, 1971, 543).xvi
To be released toward things is to wait upon Being.xvii “Waiting” itself is defined two ways in the CP. These two definitions are tightly bound to the two conceptions of time contrasted in SZ. The first is the ordinary practice of “waiting for” things, events, occasions, etc. This waiting toward things is grounded in a making present which neutralizes the future qua possibility by interpreting it merely within the narrow scope of the desires, goals, and objectives of the present, following the rigid dictates of the schedule, the calendar, or the scheme. This fixing of the future is at once the constriction of the present, robbing the present of its possibility and significance by interpreting the “now” as a solipsized point in a succession of nows that is separated from the object that Dasein awaits. The ecstatical structures are thus dissociated and/or repressed, Dasein disperses itself among and invests itself in its worldly entanglements, and it fails to hold itself together precisely by rushing around trying to fix and control things; Dasein is ready for nothing because it is trying to be ready for everything, foreclosing its possibilities by trying to plan for all of them. The structures of involvement delineated in Division I of SZ—the “for-the-sake-of-which,” the “in-order-to”, etc.—correlate roughly with this notion of “waiting for.”
The second definition of waiting, “waiting upon,” is practiced without the expectation of the fulfillment of an intention. Indeed, it is characterized by the lack of any such intention. This cessation of intentional relations is indicative of an erosion of any notion of a “subject” possessed of will, desire, and self-sameness, and of a shift in the locus of identity and the seat of action towards Being and away from Dasein. As the Scholar remarks: “the relation between that-which-regions [i.e., Being] and releasement, if it can still be considered a relation, can be thought of neither as ontic nor as ontological,” only, adds the teacher, “as regioning” (Heidegger, 1966, 76).xviii There is thus a shift in the language Heidegger uses to describe the matter of the conversation: not the “meaning of Being” (SZ) but the “nature of thinking.”xix To wait upon Being thus connotes service. The active connotations of freedom, authentication, individuation, and seizing one’s destiny that color SZ give way to more passive notions of serving, waiting, allowing, etc. Put differently: there is a shift in emphasis from existentiality to facticity, from man’s projecting to Being’s throwing.
Yet those so released are not merely “slaves” of Being. The Scientist observes that releasement “is in no way a matter of weakly allowing things to slide and drift along,” and “lies beyond the distinction between activity and passivity” (Heidegger, 1966, 61).xx Heidegger is not condoning an ascetic denial of world and will along the lines of Schopenhauer’s pessimism; releasement is most definitely not a renunciation that “floats in the realm of unreality and nothingness” (Heidegger, 1966, 80).xxi Similarly, Suzuki dismissed the
popular view which identifies the philosophy of Schopenhauer with Buddhism. According to this view, the Buddha is supposed to have taught the negation of the will to live, which was insisted upon by the German pessimist, but nothing is further from the correct understanding of Buddhism than this negativism. The Buddha did not consider the will blind, irrational, and therefore to be denied; what he really denies is the notion of ego-entity due to Ignorance, from which notion come craving, attachment to things impermanent, and the giving way to egoistic impulses (Barrett, 1956, 157).xxii

Anticipatory resoluteness still has a place within releasement: “one needs to understand ‘resolve’ as it is understood in Being and Time: as the opening of man particularly undertaken by him for openness…which we think of as that-which-regions” (Heidegger, 1966, 81).xxiii Again, we are not permitted to think of openness as something “out there” ontologically separate from Dasein, since we have been told explicitly that terms such as ontic, ontological, relation, and thing either no longer apply in the former sense, or no longer apply, period.
The type of comportment Heidegger champions is thus active insofar as it calls for an adjustment in Dasein’s attunement, but not in the sense of operating upon any object in the world-horizon with a view toward engineering a different and desired state of affairs. Heidegger thus refers to it as a “trace of willing”; it is passive insofar as it holds itself steadfast in light of the knowledge that none of its actions can directly “get through” to Being and, more importantly, it ceases to resent or repress this inescapable fact (Heidegger, 1966, 51).xxiv As Peter Kreeft points out:

“a higher acting is concealed in releasement than is found in all the actions within the world…. Not only do we become supremely (though effortlessly) active as a result of releasement, but we must exercise the most strenuous activity in order to reach its inactivity, much as the Zen monk must beat his head against the stone wall of his koan with all his energy until his head splits and his brains spill out into the universe where they belong” (Kreeft, 1971, 553).xxv

Heidegger is clear on this point:

“Releasement toward things and openness to the mystery never happen of themselves. They do not befall us accidentally. Both flourish only through persistent, courageous thinking” (Heidegger, 1966, 56).xxvi

On a similar note, Joan Stambaugh remarks that “Heidegger’s idea of Austrag (perdurance, sustained endurance) bears a striking resemblance to Dogen’s ‘sustained exertion,’ the ‘highest form of exertion, which goes on unceasingly in cycles from the first dawning of religious truth, through the test of discipline and practice, to enlightenment and Nirvana.’ These two related ideas both implicitly have to do with time” (Stambaugh, 1987, 285).xxvii American Zen roshi Richard Baker once remarked that satori, or enlightenment, is an accident, and that meditation makes one accident prone. Meditation (zazen) is the preparation, the work that renders the self receptive to satori but does not directly trigger it. Speaking about the notion of “waiting upon,” Kreeft notes:

“Like a Zen master, Heidegger does not tell us what to do, only what not to do. And in response to the natural question complaining of the resulting disorientation, he intensifies instead of relieving the disorientation, again like a Zen master” (Kreeft, 1971, 535).xxviii

In a crucial but qualified sense, there is a process of spiritual “development” in Zen, but it is not a teleological process. Zen practice is not the cultivation of positive qualities or characteristics; it is not about conditioning, but about deconditioning—hence, what not to do. The Zen analogue of releasement is “non-attachment,” and its purpose is not to crush and stifle the thought-process, but to let all phenomena—sensory perceptions, emotional tensions, concepts, etc.—simply go, to liquidate one’s cognitive assets, to exhaust the discursive mind, and gradually cease to identify with any bodily (gross) or mental (subtle) “substance,” until the bodymind itself is “dropped.”
Before leaving the CP, it is important to mention the discussion of ego, experiment, and the Being-process contained therein. Heidegger’s end of philosophy is really just the end of philosophy as the mirror of nature,xxix the end of a conception of science that regarded itself as unconditioned but was actually, according to Heidegger, only a historical emergence:
Scientist: ‘When I decided in favor of the methodological type of analysis in the physical sciences, you said that this way of looking at it was historical…. Now I see what was meant. The program of mathematics and the experiment are grounded in the relation of man as ego to the thing as object.’ Teacher: ‘They even constitute this relation in part and unfold its historical character…. The historical consists in that-which-regions…. It rests in what, coming to pass in man, regions him into his nature’ (Heidegger, 1966, 79).xxx

Thus the “ego” and its project of measuring, classifying, and discovering the world emerged over time, yet it tries to burn its birth certificate and cover up its contingency by grounding itself in some transcendent Other.
Two passages from WIM? powerfully capture the relationship between reason and the nothing, the egoic and the trans-egoic, the logical and the trans-logical: “If the power of the intellect in the field of inquiry into the nothing and into Being is thus shattered, then the destiny of the reign of ‘logic’ in philosophy is thereby decided. The idea of ‘logic’ itself disintegrates in the turbulence of a more original questioning” (Heidegger, 1977, 105).xxxi Compare Suzuki: “[Zen] does not challenge logic, it simply walks its own path of facts, leaving all the rest to their own fates. It is only when logic neglecting its proper functions tries to step into the track of Zen [or, for Heidegger, tries to soberly and seriously dismiss the nothing] that it loudly proclaims its principles and forcibly drives out the intruder” (Barrett, 1956, 21). Heidegger:
We can of course think the whole of beings in an ‘idea,’ then negate what we have imagined in our thought, thus ‘think’ it negated. In this way do we attain the formal concept of the imagined nothing but never the nothing itself… the objections of the intellect would call a halt to our search, whose legitimacy, however, can be demonstrated only on the basis of a fundamental experience of the nothing (Heidegger, 1977, 99).xxxii

I want to emphasize that Zen, as Suzuki indicates, has a decidedly more “laissez-faire” attitude toward reason: it is only when reason purports to extend its validity claims beyond its proper sphere that problems ensue. Heidegger’s antagonism toward calculative thinking, I am claiming here, is somewhat exaggerated and fails to recognize the positive aspects of reason, aspects which, in fact, allot him the space to sight his quarry.
Heidegger initially regarded this birth of the ego as a deliberate choice made by a particular culture yet, as Michael Zimmerman points out, he eventually came to abandon this view and saw the rise of calculative thinking as but another regioning of that-which-regions.xxxiii This “Being-centric” view is operative as early as 1929 when Heidegger speaks in WIM? of “the direction from which alone the nothing can come to us,” and declares that “the nothing itself nihilates,” and that this is the basis of any affirmation or negation, i.e., any logical predication, on the part of humans (Heidegger, 1977, 98, 103).xxxiv Zen could not agree more with the latter part of this sentence, yet I need to point out a crucial difference. Heidegger approaches the emergence of the ego from what we might call its decidedly phylogenetic dimension—as a kind of thinking in whose grip the West has unfolded and by whose limitations it has been constrained. Zen, however, focuses on the ontogenetic dimension through a set of pointing-out instructions that get the individual to realize and disarm the self-contractions, interpretative projections, and karmic patterns that distort his experiences of himself, others, and the world.xxxv Zen is concerned with acquainting the individual with the genealogy of his or her own ego and breaking the spell of self-separateness. Moreover, Zen would regard the later Heidegger’s tendency to ascribe agency to Being/Nothing itself as bizarre and as harboring a residual dualism.
B. The Nature of Language: “The Language Barrier” and “Planetary Thinking”
While Zen generally avoids philosophy—at least in its representational mode—and focuses on transformative practices, this is not to say that it has no philosophical heritage or support. If we were forced to distill a systematic Buddhist apologetics from the Eastern philosophical tradition that serves as the philosophical roots of Zen, it would probably be “negative dialectic.” The negative dialectic was put forth as a philosophical-pedagogical method by the second century Mahayana Buddhist thinker Nagarjuna, and it is the founding idea of Zen methodology to this day. Like Heidegger’s later writings, which scrupulously guard against any lapse into lazy metaphysical thinking by vigilantly reframing the question of the meaning of Being, negative dialectic is supremely practical in that it refuses to let any positive statement about the Absolute/Emptiness/Being stand and coagulate into a stale and rigid dogma, because the experiencexxxvi in question—satori, i.e., Enlightenment–is meaning- and content-less. I am referring to Heidegger’s nearly constant efforts to shift the terms of the debate to combat and dispel the forgetfulness that comes to obscure the originary experience of Being out of which metaphysics arises and by which it is possible in the first place. Richardson gives one such example:
the effort to lay bare the foundations of ontology was called in the early years ‘fundamental ontology,’ but after 1929 the word disappears completely. In 1949 we are told why:

the word ‘ontology’…makes it too easy to understand the grounding of metaphysics as simply an ontology of a higher sort, whereas ontology of a higher sort, which is but another name for metaphysics, must be left behind completely (Richardson, 2003, 15).xxxvii

As Zimmerman points out, Nagarjuna likewise feared that his message would be distorted into a “metaphysics of experience” and struggled to resist this reifying tendency: “Nagarjuna warned that conceiving of absolute nothingness as such a transcendental origin would lead to a metaphysics of sunyata and, inevitably, to a new kind of dualism” (Zimmerman, 1993, 253).xxxviii Ken Wilber summarizes Nagarjuna’s position:
above all, for Nagarjuna, absolute reality (Emptiness) is radically Nondual (advaya)—in itself is neither self nor no-self, neither atman nor anatman, neither permanent nor momentary/flux. His dialectical analysis is designed to show that all such categories, being profoundly dualistic, make sense only in terms of each other and are thus nothing in themselves (Wilber, 2000, 719).xxxix

Later, I will show how this so-called “apophatic” approach most certainly does not mean, however, that language is abandoned in Zen; fingers can and must be pointed, so long as they are not taken for the moon itself.
Consider Suzuki’s account of the historical situation into which Zen was introduced:
At the time of the introduction of Zen into China, most of the Buddhists were addicted to the discussion of highly metaphysical questions, or satisfied with the merely observing of the ethical precepts laid down by the Buddha or with the leading of a lethargic life entirely absorbed in the contemplation of the evanescence of things worldly. They all missed apprehending the great fact of life itself, which flows altogether outside of these vain exercises of the intellect and the imagination (Barrett, 1956, 20).xl

Five words should be highlighted here: addiction, satisfaction, lethargy, absorption, and vanity. What is Suzuki portraying but an intellectually soporific climate of metaphysical abstraction and ascetic detachment that, shall we say, induced a collective forgetfulness of Being? This suggests that Heidegger’s basic claims—whether about the status of the question of the meaning of Being in Western culture, the Being-process itself, or the nature of thinking/language—need not and cannot be confined and applied exclusively to the West.
In the “Letter on Humanism” Heidegger writes that “‘subject’ and ‘object’ are inappropriate terms of metaphysics, which very early on in the form of Occidental ‘logic’ and ‘grammar’ seized control of the interpretation of language. We today can only begin to descry what is concealed in that occurrence.”xli In the DL, Heidegger works to chip away at this Euro-/logo-centrism by making language itself the object of the dialogue, rather than “the meaning of Being” (SZ) or “the nature of thinking” (CP). The dialogue takes place between an Inquirer—Heidegger himself—and a Japanese Germanist whom we now know to have been Tezuka Tomio. The DL is based on a real conversation that took place roughly thirty years prior to Heidegger’s reconstruction. In “An Hour with Heidegger,” Tomio recounts his conversation with Heidegger: “When I mentioned ‘the open’ as a possible translation of ku (emptiness) [or, in Sanskrit, sunyata]… [Heidegger] was pleased indeed! ‘East and West,’ he said, ‘must engage in dialogue at this deep level. It is useless to do interviews that merely deal with one superficial phenomenon after another’” (May, 1996, 62).xlii
Referring to previous discussions with one “Count Kuki,” Heidegger confesses: “The danger of our dialogues was hidden in language itself, not in what we discussed, nor in the way in which we tried to do so” (Heidegger, 1971, 4).xliii The Japanese replies: “The language of the dialogue constantly destroyed the possibility of saying what the dialogue was about” (Heidegger, 1971, 5).xliv The connection to Nagarjuna’s negative dialectic should be obvious. David Loy succinctly sums this up: “any theory of nonduality, if it is to retain the prescriptive aspect of the nondual philosophies, must be paradoxical and self-negating” (Loy, 1988, 176).xlv Whether Heidegger’s thought can rightly be classified as “nondual,” a topic I will return to, is certainly debatable; as Loy notes, he certainly “affirms a paradox of thinking and no-thinking,” yet his focus on the “descriptive aspect” and failure to include a “prescriptive aspect,” as I will discuss below, is what ultimately sets him apart from the nondual traditions of Zen, Nagarjuna’s Madhyamika, and Taoism.
One exchange in the DL details an actual historical example of how the metaphysical handicap of Western languages bungled the interpretation of Heidegger’s ideas. The Japanese asserts that “we in Japan understood at once your lecture [WIM?] when it became available to us in 1930…. We marvel to this day how the Europeans could lapse into interpreting as nihilistic the nothingness of which you speak in that lecture. To us, emptiness is the loftiest name for what you mean to say with the word ‘Being’” (Heidegger, 1971, 19).xlvi The “nihilistic nothingness” alluded to here is basically the “Sartrian” nothingness which Heidegger took to be a serious distortion of his work; indeed, the very title of Sartre’s magnum opus, Being and Nothingness, is emblematic of this confusion. As William Barrett discusses in detail in his study of existentialism, Irrational Man, this crucial difference—between “no-thingness” and “nothingness”—is very much the iron curtain between East and West (Barrett, 1958, 233-4, 285).xlvii The passage quoted above also draws out a more general but hardly vague or insignificant point: Heidegger’s philosophy powerfully influenced the Japanese intellectual culture of the time, a culture thoroughly versed in and informed by the Zen Buddhist tradition.xlviii The Japanese have produced no less than seven translations of Being and Time.
It is worthwhile comparing Heidegger’s non-Western no-thingness with what Suzuki has to say about “emptiness” or sunyata, which he claims is one of the hardest words for which to find an English equivalent: “[Sunyata] is not a postulated idea. It is what makes the existence of anything possible, but it is not to be conceived immanently, as if it lay hidden in or under every existence as an independent entity. A world of relativities is set on and in sunyata…. The doctrine of sunyata is neither an immanentalism nor a transcendentalism” (Barrett, 1956, 261).xlix This is entirely consonant with later Heidegger’s abandonment of the language of “transcendence,” since this would imply some sort of “progress.” One cannot get “closer to” or “further from” sunyata via some process of intellection. Referring to a passage from The Diamond Sutra, Suzuki writes that Zen “means nothing less or more than a non-teleological interpretation of life” (Barrett, 1956, 265).l
While Heidegger admits that his naming of language as the “house of Being” was “clumsy,” he nevertheless maintains that “Europeans dwell in an entirely different house than Eastasian man,” and that “a dialogue from house to house remains nearly impossible” (Heidegger, 1971, 22).li Heidegger’s position with regard to the possibility of “inter-house dialogue” is never made entirely clear, since, by this time, he has positively abandoned the allegedly metaphysical pitfall of attempting to occupy a definite position. This ambivalence over the potential overcoming of the language barrier is repeated in a message Heidegger sent to an East-West Philosopher’s Conference held in honor of his thought in 1969:
Again and again it has seemed urgent to me that a dialogue take place with the thinkers of what is to us the Eastern world…. The greatest difficulty in the enterprise always lies, as far as I can see, in the fact that with few exceptions there is no command of the Eastern languages in Europe or the United States…. [These doubts hold] equally for both European and East Asian language, and above all for the realm of their possible dialogue. Neither side can of itself open up and establish this realm (Quoted in May, 1996, 12-13).lii

In The Question of Being, Heidegger stresses that we are “obliged not to give up the effort to practice planetary thinking,” and that “there are in store for planetary building encounters for which participants are by no means equal today. This is equally true of the European and of the East Asiatic languages and, above all, for the area of possible conversation between them” (Quoted in Thompson, 1986, 235).liii  As we saw above, in the DL Heidegger suggested to his Japanese counterpart—in the midst of their conversation–that such a conversation is nearly impossible, yet here he proclaims that it is all but necessary. Heidegger’s skepticism over the possibility of trans-linguistic mutual understanding seems strange, especially since there are cases in which the Japanese clearly had a better intuitive grasp of his ideas than Western thinkers. Fencing off different language worlds as incommensurable is perhaps just as dangerous as divvying people up according to a standard of authenticity/inauthenticity, because it naively treats “language worlds” as present-at-hand things, solipsized bubbles with clearly defined and impenetrable borders that develop in isolation from each other. Moreover, it is never made clear how such a transcendental insight can even be obtained by a being imprisoned within the confines of one such language world.
The Japanese in the DL—who, we must recall, actually bothered to undertake the task of learning an Occidental language—remarks that “while I was translating, I often felt as though I were wandering back and forth between two different language realities, such that at moments a radiance shone on me which let me sense that the wellspring of reality from which those two fundamentally different languages arise was the same” (Heidegger, 1971, 24).liv From this, Heidegger concludes that the Japanese did not seek to yoke both languages under a “general concept,” which would be precisely to try and draw one language—the Eastasian—under the rubric of another—the Occidental. In light of this, the two speakers agree that the “same” referred to above can only be “hinted” at. And though Heidegger’s “exacting poetry” is geared toward just such a hinting and is meant to thwart the metaphysical designs of such a “general concept,” he says at the outset of the DL that he desires “the assurance that European-Western saying and Eastasian saying will enter into a dialogue such that in it there sings something that wells up from a single source” (Heidegger, 1971, 8).lv
This lingering attachment to language is what demarcates Heidegger from Zen. As John Caputo points out,
The essential being (Wesen) of Zen is an experience which is translated directly, from mind to mind, from master to disciple. Language for Zen is like a finger pointing to the moon; it must be disregarded in favor of a ‘direct pointing’ without fingers, or words…. That is why where Bodhidharma says, ‘No dependence upon words and letters,’ Heidegger says that language is the house of Being: ‘Where words give out no thing may be.’ (Caputo, 1986, 216).lvi

There is certainly some truth to this, though I do not think the difference is as stark as Caputo maintains. For one thing, from the Zen perspective, to be dependent upon words and to use words are quite different things. Interestingly enough, Heidegger remarks in the DL that “language is more powerful than we,” indicating that so long as we trade in tokens of whose meaning, weight, and origin we are ignorant, we are dependent on language. Do we not then achieve a kind of liberation from and attain a new relationship to language once we have awakened to its limitations and strive after a more authentic saying? Zen masters employ not only abrupt and abrasive pedagogical techniques such as slapping a student’s face or hitting him with a stick, but also an enigmatic, elusive, dissonant grammar, something very much like an “exacting poetry.” From Heidegger’s perspective, as I showed above, the naming of language as the house of Being is not to be taken too literally, and the quote Caputo cites to bolster his claim could easily have been uttered by a Zen master, in the sense that “no thing” denotes “emptiness” or “no-mind.” David Loy captures the Heidegger-Zen relationship more adequately:
Heidegger, if not a philosopher, is still a thinker, which the Zen student is not…. both affirm a paradox which might be called ‘the thinking of no-thinking.’ But they emphasize different aspects of it. In meditation, one is concerned to dwell in the silent, empty source from which thoughts spring; as thoughts arise, one ignores them and lets them go. Heidegger is interested in the thoughts arising from that source (Loy, 1988, 175).lvii

As we saw in the CP above, Heidegger thinks that Being needs human beings, and this claim recurs in the DL: “the word ‘relation’ does want to say that man is in demand, that he belongs within a needfulness that claims him…. Hermeneutically, that is to say, with respect to bringing tidings, with respect to preserving a message” (Heidegger, 1971, 32).lviii This is what Heidegger calls the “hermeneutic relation of the two-fold.” Where Zen is content to let thoughts go, Heidegger labors to preserve them in some form. Yet Zen would also concede that defending, preserving, and transmitting the dharma is the utmost responsibility of those who have realized it; after all, that is the essence of the bodhisattva, the awakened being who vows to remain in samsara until all sentient beings are enlightened. This sounds suspiciously like “bringing tidings,” even though the final “message” is always a stranger to words and a frank declaration of what is always already the case. Suzuki elaborates:

“Zen would not be Zen if it were deprived of all means of communication. Even silence is a means of communication; the Zen masters often resort to this method…. The conceptualization of Zen is inevitable; Zen must have its philosophy. The only caution is not to identify Zen with a system of philosophy” (Barrett, 1956, 260-1).lix

Indeed, as Heidegger and the Japanese agree in the DL, to be silent about silence itself would be truly authentic saying. This is surely what they are after in defining “dialogue” as “a focusing on the reality of language,” alluding to the sense in which silence is a positive mode of discourse, perhaps even its primordial mode.

II. The Meaning of Being: Early Indications in Being and Time
In this section I briefly explore how four themes in SZ—death, fallenness, facticity, and temporality—relate to Zen. Though there is no direct evidence that Heidegger was significantly influenced by Eastern thought in his pre-SZ phase,lx this does not rule out the possibility that his early formulations demonstrate what Parkes calls a “pre-established harmony” with basic Taoist and Zen ideas. Reinhard May makes the strong claim that Heidegger’s notion of thinking-poeticizing

received its (‘silent’) directive…from ancient Chinese thought—for metaphysics, so conceived, was never developed there. Being neither indebted to an Aristotelian logic, nor receptive to an ontology involving a subject-object dichotomy, nor, above all, being conditioned by any theology, ancient Chinese thought was completely remote from the assertion of ‘eternal truths,’ which belong according to Heidegger ‘to the residue of Christian theology that has still not been properly eradicated from philosophical problematics’ (Heidegger, 1962, 229).

While May’s claim is backed up by an impressive body of evidence, that evidence is largely circumstantial,lxi and it therefore fails to prove beyond a reasonable doubt that Heidegger was directly influenced by Eastern thought from the beginning.
What are the elements that contributed to Heidegger’s novel conception of death, and where, if anywhere, did he obtain them? In the footnotes to H249 in SZ, which outlines the investigation of death, Heidegger encourages the reader to consult Dilthey’s and Simmel’s writings on death, and to “compare especially Karl Jaspers’ Psychologie der Weltanschauungen…especially pp. 259-270…. Jaspers takes as his clue to death the phenomenon of the ‘limit-situation’ as he has set it forth—a phenomenon whose fundamental significance goes beyond any typology of ‘attitudes’ and ‘world-pictures’” (Heidegger, 1962, 495).lxii We are to understand by this that the full import of the “limit-situation” exceeds the bounds of any psychology, and is only properly approached from an existential-ontological perspective, which cannot itself be the subject of a typology and/or conceptual schematization, since it is the ground of all such categorizing. Nevertheless, as Parkes points out, “the concern with totality, an experiential relation to death, and the idea of death’s ‘entering into’ experience figure importantly in the existential conception of death that Heidegger would elaborate in SZ,” and all of these components are contained in the cited passages from Jaspers. Moreover, on page 262 of the same work, Jaspers commences a brief discussion of the Buddhist conception of death, framing it, Parkes observes, as “thoroughly nihilistic and pessimistic—an account apparently influenced by the (rather unreliable) interpretations given by Schopenhauer and Nietzsche: ‘Death and transitoriness give rise in the Buddhists to a drive for the eternal reign of the peace of nothingness’” (May, 1996, 265). The Buddhist path, Jaspers claims, is essentially a death cult bent on renunciation, quietism, indifference, and pessimism.
There are two points we should note here: one, Jaspers commits the classic Western fallacy, misinterpreting Buddhistic nothingness in precisely the same way most of Heidegger’s European interpreters would misunderstand his treatment of the Nothing in WIM?; and two, at this early stage, Heidegger was already aware of an Eastern interpretation of death, albeit a misinterpretation, and was at this time engaged in forging his own conception, a conception without precedents in the Western tradition. As Parkes relays, it was precisely the originality of Heidegger’s approach to death and nothingness within the Western tradition that prompted Kyoto School member Tanabe Hajime to attend his 1923 lecture course entitled “Ontology: The Hermeneutics of Facticity,” and pen the first commentary on Heidegger’s work ever published.lxiii “Heidegger,” Parkes reports, “had ample occasion to be impressed by the visitor from Japan, having gladly acceded to his request for private tutorials in German philosophy” at a time when his existential conception of death was still fomenting (May, 1996, 82).lxiv In light of these circumstances, Parkes wagers that

since Heidegger had written on Jaspers’ idea of death as a Grenzsituation, and read his discussion of the Buddhist attitude towards death, it is probable that this topic came up in his conversations with Tanabe. And if it did, Tanabe would have explained to him that the attitude toward death of the later (Mahayana) schools of Buddhism [e.g., Zen] is…positive and life-promoting—just as their understanding of nothingness is by no means nihilistic (May, 1996, 85).lxv

The point here is that this understanding of nothingness, which Heidegger would hint at in SZ via the existential conception of death and sketch more explicitly in WIM? two years later, is found in none of the Western sources from which he drew, but was all but obvious to a Japanese thinker with whom he was in close consort. Ultimately, it is not important whether we regard this as a matter of direct influence or independent congruence, but the similarity cannot be denied.
Heidegger’s discussion of death is similar to the Buddhistic conception of death in several respects; ultimately, however, it is markedly different. Heidegger writes that

temptation, tranquilization, and alienation are distinguishing marks of the kind of Being called ‘falling.’ As falling, everyday Being-towards-death is a constant fleeing in the face of death. Being-towards-the-end has the mode of evasion in the face of it—giving explanations for it, understanding it inauthentically, and concealing it (Heidegger, 1962, 298).lxvi

Earlier on in Division I, he defines this “falling” clearly: “Fallenness into the ‘world’ means an absorption in Being-with-one-another, in so far as the latter is guided by idle talk, curiosity, and ambiguity.” The translators are specific: “The idea is rather of falling at the world or collapsing against it” (Heidegger, 1962, 220).lxvii So far, Zen is in basic agreement. The majority of the time humans stumble through life, invest their energies and hopes in objects, and flee from themselves by pretending to be familiar with themselves. Humans become addicted to and entangled with substances, and begin to interpret their sustenance and even salvation exclusively in terms of them. For Buddhism, the basis of all suffering (dukkha), including the fear of death, arises from tanha—from clinging to, investing oneself in, and ultimately identifying with transitory phenomena, with entities in the world. Heidegger’s notions of fallenness, entanglement, and dispersal are nearly identical.
As such, the so-called “Great Death”—the dissolution of the ego—is deferred, and the self contracts, attaches itself to passing phenomena, and opts to die less radical and less painful deaths as all of the entities it clings to pass away. The Zen analogue of falling is ignorance. Out of a perceived lack, humans hustle about trying to attain security, comfort, and stability by hanging onto what they wrongly perceive to be real, persisting, genuine objects. The so-called “cycle of birth-and-death” (samsara), stripped of its mythological connotations of reincarnation, actually means being dependent on both outward objects and the sense of self-separateness, the ego. This is what Zen calls the “co-dependent arising” of phenomena, the self-contraction that immediately generates karma, the chains of causation and patterns of influence that induce suffering. Karma is the Zen analogue of facticity; it refers to the various circumstances into which people are thrown, the “debts” they inherit and the limits by which they are bound. As such, people interpret their death in terms of release from such bondage, that is, they hope to be reborn with a clean slate, purged of all concupiscence. So by identifying with their karma—their feelings of lack, desire, limitation, etc., all of which are erroneously tied up with birth—they create a conception of death, which entails a futural rebirth, etc., ad infinitum.
The way out of samsara is to realize that the cycle is an illusion that is projected when the self objectifies both karma and nirvana, birth and death, bondage and freedom. For Zen, birth and death do not primarily denote physiological events; indeed, these are derivative, in much the same way that Heidegger claims that there are inauthentic, derivative modes of interpreting death or “end”, such as “stopping”, “getting finished”, “perishing”, and “demise” (Heidegger, 1962, 289-292).lxviii As such, Zen agrees with Heidegger that an “existential analysis is superordinate to the questions of a biology, psychology, theodicy, or theology of death,” (Heidegger, 1962, 292) even though it has a very different idea of what properly constitutes an “existential analysis” and a conception of psychology that is very different from the Western one Heidegger is reacting to.lxix For Zen, birth and death are epiphenomenal concepts that are generated by the consolidation of the ego.
Heidegger makes clear that to free oneself for death, to awaken from the dream fabricated by “the They-self” that blinds Dasein to its final possibility and represses it as a possibility, is to gather oneself together from out of one’s dispersion in worldly attachments and to concentrate oneself resolutely in anticipating death. This stance is “anticipatory” only with respect to Heidegger’s notion of “primordial temporality,” not toward death as a future “now” that will eventually “occur.” Heidegger also appears to claim that adopting an optimistic or a pessimistic attitude toward death is equally repressive, since both stances fix death as an imminent, actual, forthcoming event-in-the-world, i.e., as something present-at-hand. This squares with Suzuki’s claim that Zen is neither an immanental pessimism nor a transcendental optimism.
All of the inauthentic responses toward death, Heidegger claims, arise from treating death as an object, in which case fear, not anxiety, is the dominant state-of-mind. Fear is in all cases the repression of anxiety. And while each temporal ecstasis always comes together with all of the others, and though all of them are explicitly held together in the “moment of vision” or “authentic present,” Heidegger ascribes a certain primacy to the future: “Ecstatico-horizonal temporality temporalizes itself primarily in terms of the future” (Heidegger, 1962, 479).lxx Just as the inauthentic comportment toward death robs death of its significance and objectifies it, inauthentic temporality, governed by what Heidegger calls a “making-present,” represses the past and the future by treating them merely as receding and forthcoming “nows.” In both cases, Dasein must collect itself from its dispersion and absorption in its proximate concerns. This emphasis on futurity, possibility, and anticipation is what distinguishes Heidegger’s concepts of death and time from the Zen perspective.
Referring to the “within-time-ness” characteristic of inauthentic temporality, Heidegger claims that the “‘now’ is not pregnant with the ‘not-yet-now.’” That is, in falling, we have uprooted ourselves from the “stretching-along” characteristic of authentic temporality; we orient ourselves merely in terms of the present instead of the future, which is to say, we fail to orient ourselves. Speaking from the Buddhist perspective, David Loy asks: “what if there is a ‘now’ which is pregnant with the ‘not-yet-now’?” He notes that Heidegger rejects the mystical notion of an “eternal now” on the grounds that it is derived from the traditional conception of time and is therefore a mere abstraction. Loy questions whether or not Heidegger’s alternative of authentic temporality is really adequate:
The problem with both of Heidegger’s alternatives is that both are preoccupied with the future because in different ways both are reactions to the possibility of death; thus both are ways of running away from the present. Inauthentic existence scattered into a series of disconnected nows is “a fleeing in the face of death”; authentic life pulled out of this dispersal by the inevitable possibility of death is more aware of its impending death, but still driven by it. This means that neither experiences the present for what it is in itself, but only through the shadow that the inescapable future casts over it. What the present might be without that shadow is not considered in SZ (Loy, 1988, 15).lxxi

Heidegger would likely respond that Loy is simply lapsing back into inauthentic temporality by pointing to what the present is “in itself,” but this simply calls us back to Bodhidharma’s warning: “No dependence on words.” In short: I am suggesting that there are two kinds of “eternal now.” The first, criticized by Heidegger, is a “conceptual” eternity that is opposed to time and is indeed both derived from the ordinary experience of time and driven by death. This we might call “ego-” or “other worldly-” “eternity”; on this point, Buddhism and Heidegger are in complete agreement. The second kind, however, is what we have all along been calling nirvana. When Zen masters say that birth is no-birth, that death is no-death, they are neither kidding nor speaking metaphorically. The radical claim, to be verified only in experience by following the meditative injunction and checking one’s results in a community of the experienced, is that birth and death, that past, present, and future, all dissolve when the ego dissolves. One is no longer afraid of or anxious over death, not because one is resolved, but because one realizes that there is no-thing to be afraid of or anxious over, and, more importantly, that there isn’t even anyone to be afraid or anxious. Moreover, this entails that the entire dualistic business of finding oneself stuck or thrown into a world with finite possibilities (an imperfect, “this-worldly” samsara), of speculating an endless eternity out of a feeling of desire/lack (an “other-worldly” heaven), and, finally, of violently laboring to transcend the present by resolutely striding into the future amounts to the desperate flailings of the ego trying to deny its groundlessness. In this way, we might say that through his treatment of death, fallenness, facticity, and temporality in SZ, Heidegger comes very close to Zen’s radical nonduality, yet draws up short.
And though he later recanted the residual metaphysics of subjectivity that he came to believe encumbered SZ, even his later works bear the marks of a residual—though unmistakable—dualism. As John Steffney sums up:
Although Heidegger’s attempt to think from Being, which became evident with his famous ‘turn,’ is admirable—the attempt to think from Being toward Dasein, not from Dasein toward Being—Zen would say that this reversal would have to be further radicalized, for both the attempts to think ‘toward’ Being or ‘toward’ Dasein are equally dualistic (Steffney, 1981, 52).lxxii
III. Heidegger’s Ambivalence
This is why I have suggested throughout that no matter which way Heidegger happens to be turning, leaning, or thinking—toward Being or from Being—and no matter how he is framing his question—the meaning of Being, the nature of thinking, or the nature of language—he is unquestionably in transit, on the go, in between two radically different ways of understanding human existence. Though he clearly had some minimal exposure to Eastern thought even from an early point in his career before the composition of SZ, and probably was, as Pöggeler claims, significantly influenced by it in his later career, I conclude that he remains tethered, albeit tenuously, to Western thinking. In the DL he remarks that the transformation of thinking he envisions is to be understood as a movement from one site—that of metaphysics—to another—which, obviously, is left nameless. Heidegger is perpetually adventuring in the wasteland between these two “poles”; as Steffney puts it, “because he could not break—entirely—through the matrix of ego-consciousness with its inherent bifurcations, his thinking was never genuinely trans-metaphysical. It was at best quasi-metaphysical” (Steffney, 1977, 352).lxxiii
While there are indications that he regarded the positive task of a dialogue between Western and Eastern thought—“planetary thinking”–as important and essential for the future, it appears that he was more concerned with the negative task of clearing away the calcified vestiges of metaphysics still enclosing the Western mind. One could even argue that they are two folds of the same task. In 1953, Heidegger wrote that “a dialogue with the Greek thinkers and their language…has hardly even been prepared yet, and remains in turn the precondition for our inevitable dialogue with the East Asian world” (Quoted in May, 1996, 103).lxxiv Clearly, Heidegger wanted to make absolutely sure that such a dialogue would, as it were, not get off on the wrong foot.
In closing, I suggest three basic criticisms of Heidegger’s overall approach: Heidegger reifies “the West,” he neglects to provide an account of human development, and he refuses to prescribe any practices to cultivate the primordial experience of Being he clearly felt Western culture to be so desperately in need of. The first can be traced to comments made in the famous Der Spiegel interview of 1966, in which Heidegger proclaimed that “a reversal can be prepared only from the same part of the world in which the modern technological world originated, and that it cannot come about through the adoption of Zen Buddhism or any other Eastern experience of the world…. Thinking itself can only be transformed by a thinking which has the same origin and destiny” (Quoted in May, 1996, 8).lxxv In light of my discussion of the language barrier and planetary thinking above, it is unclear precisely why this “origin” is properly framed as ancient Greece, rather than “the same” from which language springs. By drawing this line in the sand, Heidegger sets up a rigid distinction between East and West that echoes throughout his later works. Zimmerman sums up this phenomenon:
In making such a distinction between East and West, Heidegger not only tended to downplay the impact of Eastern thinking on the German philosophical tradition, but also seemed to be thinking metaphysically in accordance with a binary opposition between ‘East’ and ‘West,’ an opposition that seems to privilege the West as the origin of the technological disclosure of things that now pervades the planet (Zimmerman, 1993, 251).lxxvi

In short, Heidegger treats “the West” as something present-at-hand. However, Heidegger makes explicitly clear in the DL that he is not envisioning some sort of return to Greek thinking. It remains to be seen, then, in what sense we should approach his thinking as “Western.”
Zimmerman continues: “in calling for another beginning that would displace the Western metaphysical quest for the ultimate ground of things, Heidegger questioned the validity of the West’s claims to cultural superiority” (Zimmerman, 1993, 251).lxxvii True enough, yet the deeper question is about superiority per se, which we might generally construe as the problem of “verticality”—of hierarchy, ranking, and teleology. Caputo’s poststructuralist reading of Heidegger wants to level the ontological playing field. Referring to Heidegger’s colorful ruminations on the destining of the West in ancient Greece, Caputo writes:

there is a dream-like, indeed I would say Camelot-like quality…to this discourse…. when [Heidegger] talks about the transition from the end of philosophy to the ‘new beginning,’ then he gives way to the hope which is the other side of nostalgia. Thinking becomes recollecting and aspiring; time is a circle in which what comes about in the primordial beginning traces out the possibility of what can come again. Such thinking is nostalgic, eschatological, a higher-order, more sublated version of metaphysics…. Derrida was quite right, I think, to delimit Heidegger’s talk about ‘authenticity.’ It is Platonic and politically dangerous to go around dividing people up into the authentic and inauthentic (Caputo, 1986, xxii-xxiv).lxxviii

Zen agrees with the first criticism, but not with the second. Though I quoted Suzuki above as saying that Zen is a “non-teleological view of life,” this is not to say that it does not recognize degrees of spiritual development. Suzuki writes that

it is impossible not to speak of some kind of progress. Even Zen as something possible of demonstration in one way or another must be subjected to the limitations of time. That is to say, there are, after all, grades of development in its study; and some must be said to have more deeply, more penetratingly realized the truth of Zen…. This side of Zen is known as its ‘constructive’ aspect…. And here Zen fully recognizes degrees of spiritual development among its followers, as the truth reveals itself gradually in their minds… (Barrett, 1956, 364)lxxix

There is no “phallo-centrism” or “patriarchy” at work here, imposing some arbitrary standard or telos on an unsuspecting multitude; no vicious dichotomizing of people into authentic and inauthentic; no nasty elitism. On this matter, Zen is in complete disagreement with this de-mythologized version of Heidegger and the postmodern tradition that follows it. Heidegger fails to offer any account of human development because of his insistence in SZ that the existentiales are “permanent”—i.e., facticity, untruth, inauthenticity, “the They”, etc., cannot be overcome. Since the existential categories smack of the same metaphysical foundationalism of, say, Aristotelian teleology, Heidegger abandoned the discourse of authenticity and existentiality, which is to say, he abandoned structures, period. Yet Zen allows that we cannot help but acknowledge what I would term “fluid” structures of the selflxxx—referred to variously as karmas, yanas, skandhas, sheaths, etc.—which certainly do coagulate and linger, yet which may ultimately be undone. And the more a person has sloughed off these inauthentic trappings, the more evolved, the more mature, the more developed he or she is said to be. This judgment, moreover, is made by a community of practitioners who have already, as it were, walked the path. Only in this very qualified sense are individuals deemed authentic or enlightened. Ultimately, for Zen, all humans possess buddhanature, yet they can fail to realize it, and it is this ignorance that creates the illusion of ignorant and enlightened.lxxxi
This relates directly to Heidegger’s ambivalent relationship toward rationality and modernity. For example, near the outset of SZ, Heidegger repeatedly refers to Dasein’s pre-conceptual understanding of Being, the basic, average, everyday way in which people go about their business and pursue their worldly engagements within a background called the world, which they rarely attend to yet tacitly assume in all of their dealings. That is, they either never stop to thematize Being, it never arises as an issue, or they actively repress its emergence; yet they would be unable even to be engaged in the world without some dim, pre-thematic grasp of Being. In the final paragraph of the treatise, however, Heidegger remarks that “Being has been disclosed in a preliminary way, though non-conceptually” (Heidegger, 1962, 488).lxxxii While both the former and latter modes of disclosing Being are non-conceptual, there is a substantial difference. The pre-conceptual is thoroughly in the sway of the ontic and entangled with phenomena, while the latter has conceptually reckoned with its own existence, realized the poverty of both the average everyday (pre-conceptual) and the rational-scientific (conceptual) comportments, and been propelled to interpret its own being, and Being itself, in an entirely different yet still non-conceptual manner, that is, trans-conceptually. Richardson’s attempt to thin this thicket does not shed much light: “Taken in its totality, Dasein is not a subject, but it is a self—a non-subjective, rather trans-subjective, or even pre-subjective self, sc. transcendence” (Richardson, 2003, 101).lxxxiii We are thus forced into speaking of Dasein as the “between,” yet this dialogical cipher still moves within a notion of duality.
The attempt to get back to Being—to re-awaken to the forgotten meaning of Being, re-peat a heritage, re-tap some dormant reservoirs, to return to the roots and origins—that inheres in Heidegger’s early and late work lends itself to the idea that the modern world, and the mode of cognition by which it was constituted, namely, monological reason or calculative thinking, is a great mistake, a collective entanglement with entities in the world, and that we should therefore seek to regress to some sort of pre-modern, pre-rational form of society. While there are a plethora of passages in both SZ and in later works such as the DL which contradict this Romantic, mythological reading of Heidegger, it is necessary not to overlook this very real ambivalence in his thought. This ambivalence, I think, derives from Heidegger’s failure to differentiate the non-conceptual, the non-rational, the non-discursive, into its pre- and trans- modes. Michael Zimmerman, appropriating Ken Wilber’s “pre-/trans- fallacy,” notes that

one must first be an ordinary egoic subject before existing authentically as the transpersonal clearing, within which something like ‘personhood’ can manifest itself. In other words, before one can become ‘no one,’ one must first be ‘some one.’ Recognizing the constructed nature of the egoic subject is possible only insofar as such a subject has been constructed in the first place (Zimmerman, 2000, 140).lxxxiv

Put differently: it is one thing to have mastered reason, experienced its inherent limitations and empty claims to totality and self-consistency, and transcended it, what Heidegger calls meditative thinking, or thinking from Being; quite another to have never bent oneself to its rule. The former is trans-conceptual thinking, the latter is pre-conceptual.
The relevance of this strain in Heidegger’s thought to Zen is crucial. Zen readily admits the bankruptcy of reason’s attempts to calculate existence and treat entities as, in Kant’s terminology, transcendentally real, or in Heidegger’s parlance, as present-at-hand, yet this emptiness of phenomena is at once the emptiness of the ego. There is, for Zen, quite literally a world of difference between the pre-egoic—which is a jumble of drives, perceptions, and intentional comportments that have not yet congealed into a relatively stable self—and the trans-egoic—which, after attaining the sense of personal identity and assuming the notion of a soul substance persisting over time, confronts its own nothingness and transcends the illusion of a separate self. The space between is the very same rational-ego whose ignorance about its own being is deconstructed in SZ. However, Zen goes further than Heidegger in denying what duality lingers in the subjectivist metaphysics of his early work and the ontological difference of the later works through the doctrine of an-atman (no-self).
The key difference is that Zen has an attendant set of psychophysical practices that train the mind.lxxxv This is a training regimen that has successfully been passed down for centuries. It has taken root and flourished in Chinese, Japanese, Korean, Vietnamese, and American cultures. The nature of mind—“no-mind”—is directly communicated from teacher to student. The sangha is the intersubjective space in which this exchange takes place. The key here is that the process does not consist in the dogmatic imposition of a set of allegedly eternal truths, i.e., facts about the world, which belong to the domain of the mythos and the logos, apprehended through faith or reason. The individual is not asked to uncritically swallow the assertions of “the They,” but is instead invited to perform the experiment, to test his findings in a community of the adequate, and to confirm/refute those findings based on his own empirical research. Heidegger resists signing off on any such set of practices, because they seem to suggest a calculative, scientific, and technological kind of thinking that does violence to and covers up the mystery of Being, that commercializes and thus de-sacralizes a secret: “the program of mathematics and the experiment are grounded in the relation of man as ego to the thing as object” (Heidegger, 1966, 79).lxxxvi However, the truth of Zen is something to be experientially verified in the laboratory of one’s own awareness by performing the experiment called meditation. This is why Suzuki described Zen as a “radical empiricism” (Barrett, 1956, 140).lxxxvii
The overblown tendency to destabilize, unsettle, and disturb which permeates Heidegger’s work as a whole makes it all but impossible for any such healthy institutional incarnation or individual transformation to occur. This deconstructive tendency is so bent on the negative tasks of inverting stodgy hierarchies, delimiting conceptual binaries, liberating excluded middles and drilling holes through master narratives that it never constructs anything. It is hard enough handing “no-thingness” down, and harder still when one refuses to prescribe any methods by which to transmit it or to consider the legitimacy of “foreign” methods. Such is the world of difference between handing down no-thingness and passing on nothing.








Buddha’s Brain

Rick Hanson, Ph.D., with Richard Mendius, MD

…no one yet knows exactly how the brain makes the mind, or how—as Dan Siegel puts it—the mind uses the brain to make the mind. It’s sometimes said that the greatest remaining scientific questions are: What caused the Big Bang? What is the grand unified theory that integrates quantum mechanics and general relativity? And what is the relationship between mind and the brain, especially regarding conscious experience? The last question is up there with the other two because it is as difficult to answer and as important.

…It could be 350 years, and maybe longer, before we completely understand the relationship between the brain and the mind. But meanwhile, a working hypothesis is that the mind is what the brain does.

Therefore, an awakening mind means an awakening brain…

The Causes Of Suffering

Although life has many pleasures and joys, it also contains considerable discomfort and sorrow—the unfortunate side effect of three strategies that evolved to help animals, including us, pass on their genes. For sheer survival, these strategies work great, but they also lead to suffering…To summarize, whenever a strategy runs into trouble, uncomfortable—sometimes even agonizing—alarm signals pulse through the nervous system to set the animal back on track. But trouble comes all the time, since each strategy contains inherent contradictions, as the animal tries to:

Separate what is actually connected, in order to create a boundary between itself and the world

Stabilize what keeps changing, in order to maintain its internal systems within tight ranges

Hold onto fleeting pleasures and escape inevitable pains, in order to approach opportunities and avoid threats.

Most animals don’t have nervous systems complex enough to allow these strategies’ alarms to grow into significant distress. But our vastly more developed brain is fertile ground for a harvest of suffering. Only we humans worry about the future, regret the past, and blame ourselves for the present.

We get frustrated when we can’t have what we want, and disappointed when what we like ends. We suffer that we suffer. We get upset about being in pain, angry about dying, sad about waking up sad yet another day. This kind of suffering—which encompasses most of our unhappiness and dissatisfaction—is constructed by the brain. It is made up. Which is ironic, poignant—and supremely hopeful.

For if the brain is the cause of suffering, it can also be its cure.

Virtue, Mindfulness, And Wisdom

More than two thousand years ago, a young man named Siddhartha…spent many years training his mind and thus his brain. On the night of his awakening, he looked deep inside his mind…and saw there both the causes of suffering and the path to freedom from suffering. Then, for forty years, he wandered northern India, teaching all who would listen how to:

Cool the fires of greed and hatred to live with integrity

Steady and concentrate the mind to see through its confusions

Develop liberating insight

In short, he taught virtue, mindfulness…and wisdom. These are the three pillars of Buddhist practice, as well as the wellsprings of everyday well-being, psychological growth, and spiritual realization.

Virtue simply involves regulating your actions, words, and thoughts to create benefits rather than harms for yourself and others. In your brain, virtue draws on top-down direction from the prefrontal cortex (PFC)…Virtue also relies on bottom-up calming from the parasympathetic nervous system and positive emotions from the limbic system…

Mindfulness involves the skillful use of attention to both your inner and outer worlds. Since your brain learns mainly from what you attend to, mindfulness is the doorway to taking in good experiences and making them a part of yourself…

Wisdom is applied common sense, which you acquire in two steps. First, you come to understand what hurts and what helps—in other words, the causes of suffering and the path to its end…Then, based on this understanding, you let go of those things that hurt and strengthen those that help…As a result, over time you’ll feel more connected with everything, more serene about how all things change and end, and more able to meet pleasure and pain without grasping after the one and struggling with the other…[and] finally…what is perhaps the most seductive and subtle challenge to wisdom: the sense of being a self who is separate from and vulnerable to the world.

Regulation, Learning, And Selection

Virtue, mindfulness, and wisdom are supported by the three fundamental functions of the brain: regulation, learning, and selection. Your brain regulates itself—and other bodily systems—through a combination of excitatory and inhibitory activity: green lights and red lights. It learns through forming new circuits and strengthening or weakening existing ones. And it selects whatever experience has taught it to value; for example, even an earthworm can be trained to pick a particular path to avoid an electric shock.

Nonetheless, each pillar of practice corresponds quite closely to one of the three fundamental neural functions. Virtue relies heavily on regulation, both to excite positive inclinations and to inhibit negative ones. Mindfulness leads to new learning—since attention shapes neural circuits—and draws upon past learning to develop a steadier and more concentrated awareness. Wisdom is a matter of making choices, such as letting go of lesser pleasures for the sake of greater ones. Consequently, developing virtue, mindfulness, and wisdom in your mind depends on improving regulation, learning, and selection in your brain. Strengthening the three neural functions…thus buttresses the pillars of practice.
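The idea that the brain "learns through forming new circuits and strengthening or weakening existing ones" can be caricatured in a few lines of code. This is only a toy Hebbian-style sketch (the update rule, learning rate, and decay value are illustrative assumptions, not a claim about real neural physiology): a connection between two co-active units is nudged stronger, and an unused connection slowly fades.

```python
# Toy "neurons that fire together wire together" rule (illustration only).
def update_weight(weight, pre_active, post_active,
                  learn_rate=0.1, decay=0.02):
    """Strengthen a connection when both units are active; otherwise let it fade."""
    if pre_active and post_active:
        return weight + learn_rate * (1.0 - weight)  # nudge toward 1.0
    return weight * (1.0 - decay)                    # slow weakening

w = 0.2
for _ in range(50):  # repeated co-activation, i.e., repeated attention
    w = update_weight(w, True, True)
print(f"after repeated pairing: {w:.2f}")  # close to 1.0
```

In the book's terms, attention decides which connections get the repeated pairing, which is why mindfulness is described as shaping neural circuits.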

Inclining The Mind

When you set out on the path of awakening, you begin wherever you are…Some traditions describe this process as an uncovering of the true nature that was always present; others frame it as a transformation of your mind and body…

On the one hand, your true nature is both a refuge and a resource for the sometimes difficult work of psychological growth…It’s a remarkable fact that the people who have gone the very deepest into the mind—the sages and saints of every religious tradition—all say essentially the same thing: your fundamental nature is pure, conscious, peaceful, radiant, loving, and wise, and it is joined in mysterious ways with the ultimate underpinnings of reality, by whatever name we give That. Although your true nature may be hidden momentarily by stress and worry, anger and unfulfilled longings, it still continues to exist. Knowing this can be a great comfort.

On the other hand, working with the mind and body to encourage the development of what’s wholesome—and the uprooting of what’s not—is central to every path of psychological…development. Even if practice is a matter of ‘removing the obscurations’ to true nature…the clearing of these is a progressive process of training, purification, and transformation. Paradoxically, it takes time to become what we already are. [It takes time to personally actualize universal potentials]

In either case, these changes in the mind—uncovering inherent purity and cultivating wholesome qualities—reflect changes in the brain. By understanding better how the brain works and changes—how it gets emotionally hijacked or settles into calm virtue; how it creates distractibility or fosters mindful attention; how it makes harmful choices or wise ones—you can take more control of your brain, and therefore your mind…

The Evolution Of Suffering

…To make any problem better, you need to understand its causes. That’s why all the great physicians, psychologists, and spiritual teachers have been master diagnosticians. For example, in his Four Noble Truths, the Buddha identified an ailment (suffering), diagnosed its cause (craving: a compelling sense of need for something), and prescribed a treatment (the Eightfold Path)….

The Evolving Brain

Life began around 3.5 billion years ago. Multicelled creatures first appeared about 650 million years ago…By the time the first jellyfish arose about 600 million years ago, animals had grown complex enough that their sensory and motor systems needed to communicate with each other; thus the beginnings of neural tissue. As animals evolved, so did their nervous systems, which slowly developed a central headquarters in the form of a brain.

Evolution builds on preexisting capabilities. Life’s progression can be seen inside your brain, in terms of what Paul MacLean (1990) referred to as the reptilian, paleomammalian, and neomammalian levels of development…

Cortical tissues that are relatively recent, complex, conceptualizing, slow, and motivationally diffuse sit atop subcortical and brain-stem structures that are ancient, simplistic, concrete, fast, and motivationally intense. (The subcortical region lies in the center of your brain, beneath the cortex and on top of the brain stem; the brain stem roughly corresponds to the “reptilian brain.”) As you go through your day, there’s a kind of lizard-squirrel-monkey brain in your head shaping your reactions from the bottom up.

Nonetheless, the modern cortex has great influence over the rest of the brain, and it’s been shaped by evolutionary pressures to develop ever-improving abilities to parent, bond, communicate, and love.

The cortex is divided into two “hemispheres” connected by the corpus callosum. As we evolved, the left hemisphere (in most people) came to focus on sequential and linguistic processing, while the right hemisphere specialized in holistic and visual processing; of course, the two halves of your brain work closely together. Many neural structures are duplicated so that there is one in each hemisphere; nonetheless, the usual convention is to refer to a structure in the singular…

Three Survival Strategies

Over hundreds of millions of years of evolution, our ancestors developed three fundamental strategies for survival:

Creating separations—in order to form boundaries between themselves and the world, and between one mental state and another.

Maintaining stability—in order to keep physical and mental systems in a healthy balance.

Approaching opportunities and avoiding threats—in order to gain things that promote offspring and to escape or resist things that don’t.

These strategies have been extraordinarily effective for survival. But Mother Nature doesn’t care how they feel. To motivate animals, including ourselves, to follow these strategies and pass on their genes, neural networks evolved to create pain and distress under certain conditions: when separations break down, stability is shaken, opportunities disappoint, and threats loom. Unfortunately these conditions happen all the time, because:

Everything is connected.

Everything keeps changing.

Opportunities routinely remain unfulfilled or lose their luster, and many threats are inescapable (aging and death).

Not So Separate

The parietal lobes of the brain are located in the upper back of the head (a ‘lobe’ is a rounded swelling of the cortex). For most people, the left lobe establishes that the body is distinct from the world, and the right lobe indicates where the body is compared to features in its environment. The result is an automatic, underlying assumption along the lines of I am separate and independent. Although this is true in some ways, in many important ways it is not.

Not So Distinct

To live, an organism must metabolize: it must exchange matter and energy with its environment. Consequently, over the course of a year, many of the atoms in your body are replaced with new ones. The energy you use to get a drink of water comes from sunshine working its way up to you through the food chain—in a real sense, light lifts the cup to your lips. The apparent wall between your body and the world is more like a picket fence.

And between your mind and the world, it’s like a line painted on the sidewalk. Language and culture enter and pattern your mind from the moment of birth. Empathy and love naturally attune you to other people, so your mind moves into resonance with theirs. These flows of mental activity go both ways as you influence others.

Within your mind, there are hardly any lines at all. All its contents flow into each other, sensations becoming thoughts, feelings, desires, actions, and more sensations. This stream of consciousness correlates with a cascade of fleeting neural assemblies, each assembly dispersing into the next one, often in less than a second.

Not So Independent

…Most of the atoms in your body—including the oxygen in your lungs and the iron in your blood—were born inside a star. In the early universe, hydrogen was just about the only element. Stars are giant fusion reactors that pound together hydrogen atoms, making heavier elements and releasing lots of energy in the process. The ones that went supernova spread their contents far and wide. By the time our solar system started to form, roughly nine billion years after the universe began, enough large atoms existed to make our planet, to make the hands that hold this book and the brain that understands these words. Truly, you’re here because a lot of stars blew up. Your body is made of stardust.

Your mind also depends on countless preceding causes. Think of the life events and people that have shaped your views, personality, and emotions. Imagine having been switched at birth and raised by poor sharecroppers in Kenya or a wealthy oil family in Texas; how different would your mind be today?

The Suffering Of Separation

Since we are each connected and interdependent with the world, our attempts to be separate and independent are regularly frustrated, which produces painful signals of disturbance and threat. Further, even when our efforts are temporarily successful, they still lead to suffering. When you regard the world as ‘not me at all,’ it is potentially unsafe, leading you to fear and resist it. Once you say, ‘I am this body apart from the world,’ the body’s frailties become your own. If you think it weighs too much or doesn’t look right, you suffer. If it’s threatened by illness, aging, and death—as all bodies are—you suffer.

Not So Permanent

Your body, brain and mind contain vast numbers of systems that must maintain a healthy equilibrium. The problem, though, is that changing conditions disturb these systems, resulting in signals of threat, pain, and distress—in a word, suffering.

We Are Dynamically Changing Systems

Let’s consider a single neuron, one that releases the neurotransmitter serotonin. This tiny neuron is both part of the nervous system and a complex system in its own right that requires multiple subsystems to keep it running. When it fires, tendrils at the end of its axon expel a burst of molecules into the synapses—the connections—it makes with other neurons. Each tendril contains about two hundred little bubbles called vesicles that are full of serotonin. Every time the neuron fires, five to ten vesicles spill open. Since a typical neuron fires around ten times a second, the serotonin vesicles of each tendril are emptied out every few seconds.
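The arithmetic here can be checked with a quick back-of-the-envelope sketch. The figures (about two hundred vesicles per tendril, five to ten spilled per firing, roughly ten firings a second) come straight from the text; the midpoint release rate is an assumption for the estimate.

```python
# Back-of-the-envelope estimate of how quickly one tendril's serotonin
# vesicles are depleted, using the figures given in the text.

vesicles_per_tendril = 200      # little bubbles full of serotonin
released_per_firing = 7.5       # five to ten spill open per firing; take the midpoint
firings_per_second = 10         # a typical neuron fires around ten times a second

seconds_to_empty = vesicles_per_tendril / (released_per_firing * firings_per_second)
print(f"Tendril emptied in about {seconds_to_empty:.1f} seconds")
```

At the midpoint rate, a tendril empties in under three seconds, which is why the resupply machinery described next has to run continuously.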

Consequently, busy little molecule machines must either manufacture new serotonin or recycle loose serotonin floating around the neuron. Then they need to build vesicles, fill them with serotonin, and move them close to where the action is, at the tip of each tendril. That’s a lot of processes to keep in balance, with many things that could go wrong—and serotonin metabolism is just one of the thousands of systems in your body…

The Challenges Of Maintaining Equilibrium

For you to stay healthy, each system in your body and mind must balance two conflicting needs. On the one hand it must remain open to inputs during ongoing transactions with its local environment; closed systems are dead systems. On the other hand, each system must also preserve a fundamental stability, staying centered around a good set-point and within certain ranges—not too hot, nor too cold. For example, inhibition from the prefrontal cortex (PFC) and arousal from the limbic system must balance each other: too much inhibition and you feel numb inside, too much arousal and you feel overwhelmed.

Signals Of Threat

To keep your systems in balance, sensors register each system’s state (as the thermometer does inside a thermostat) and send signals to regulators to restore equilibrium if the system gets out of range…But some signals for corrective action are so important that they bubble up into consciousness. For example, if your body gets too cold, you feel chilled; if it gets too hot, you feel like you’re baking.
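The thermostat analogy lends itself to a tiny sketch. The set-point, tolerance, and gain values below are invented purely for illustration; the point is only the loop itself: sense the state, compare it to the set-point, and push back when it drifts out of range.

```python
# A minimal homeostatic feedback loop, thermostat-style: a sensor reads the
# system's state, and a regulator nudges it back toward the set-point.
# All numbers are illustrative, not physiological.

def regulate(state: float, set_point: float = 37.0, tolerance: float = 0.5,
             gain: float = 0.3) -> tuple[float, bool]:
    """Return the corrected state and whether a 'threat signal' fired."""
    error = state - set_point
    out_of_range = abs(error) > tolerance   # the signal that may bubble up into awareness
    correction = -gain * error              # push back toward equilibrium
    return state + correction, out_of_range

state = 39.0                                # disturbed well out of range
for _ in range(10):
    state, alarm = regulate(state)
print(round(state, 2))                      # settles back near the set-point
```

Each pass shrinks the error, so the alarm quiets once the state is back within tolerance, which mirrors how the felt sense of being too hot or too cold fades as equilibrium returns.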

These consciously experienced signals are unpleasant, in part because they carry a sense of threat—a call to restore equilibrium before things slide too far too fast down the slippery slope. The call may come softly, with a sense of unease, or loudly with alarm, even panic. However it comes, it mobilizes your brain to do whatever it takes to get you back in balance.

This mobilization usually comes with feelings of craving; these range from quiet longings to a desperate sense of compulsion. It is interesting that the word for craving in Pali—the language of early Buddhism—is tanha, the root of which means thirst. The word “thirst” conveys the visceral power of threat signals, even when they have nothing to do with life or limb, such as the possibility of being rejected. Threat signals are effective precisely because they’re unpleasant—because they make you suffer, sometimes a little, sometimes a lot. You want them to stop.

Everything Keeps Changing

Occasionally, threat signals do stop for a while—just as long as every system stays in balance. But since the world is always changing, there are endless disturbances in the equilibria of your body, mind, and relationships. The regulators of the systems of your life, from the molecular bottom all the way up to the interpersonal top, must keep trying to impose static order on inherently unstable processes.

Consider the impermanence of the physical world, from the volatility of quantum particles to our own Sun, which will someday swell into a red giant and swallow the Earth. Or consider the turbulence of your nervous system: for example, regions of the PFC that support consciousness are updated five to eight times a second.

This neurological instability underlies all states of mind. For example, every thought involves a momentary partitioning of streaming neural traffic into a coherent assembly of synapses that must soon disperse into fertile disorder to allow other thoughts to emerge. Observe even a single breath, and you will experience its sensations changing, dispersing, and disappearing soon after they arise.

Everything changes. That’s the universal nature of outer reality and inner experience. Therefore, there’s no end to disturbed equilibria as long as you live. But to help you survive, your brain keeps trying to stop the river, struggling to hold dynamic systems in place, to find fixed patterns in the variable world, and to construct permanent plans for changing conditions. Consequently, your brain is forever chasing after the moment that has just passed, trying to understand and control it.

It’s as if we live at the edge of a waterfall, with each moment rushing at us—experienced only and always now at the lip—and then zip, it’s over the edge and gone. But the brain is forever clutching at what has just surged by.

Not So Pleasant Or Painful

In order to pass on their genes, our animal ancestors had to choose correctly many times a day whether to approach something or avoid it. Today, humans approach and avoid mental states as well as physical objects; for example, we pursue self-worth and push away shame. Nonetheless, for all its sophistication, human approaching and avoiding draws on much the same neural circuitry used by a monkey to look for bananas or a lizard to hide under a rock.

The Feeling Tone Of Experience

How does your brain decide if something should be approached or avoided? Let’s say you’re walking in the woods; you round a bend and suddenly see a curvy shape on the ground right smack in front of you. To simplify a complex process, during the first few tenths of a second, light bouncing off this curved object is sent to your occipital cortex…for processing into a meaningful image. Then the occipital cortex sends representations of this image in two directions: to the hippocampus, for evaluation as a potential threat or opportunity, and to the PFC and other parts of the brain for more sophisticated—and time-consuming—analysis.

Just in case, your hippocampus immediately compares the image to its short list of jump-first-think-later dangers. It quickly finds curvy shapes on its danger list, causing it to send a high-priority alert to the amygdala: “Watch out!” The amygdala—which is like an alarm bell—then pulses both a general warning throughout your brain and a special fast-track signal to your fight-or-flight neural and hormonal systems…

Meanwhile, the powerful but relatively slow PFC has been pulling information out of long-term memory to figure out whether the darn thing is a snake or a stick. As a few more seconds tick by, the PFC zeros in on the object’s inert nature—and the fact that several people ahead of you walked past it without saying anything—and concludes that it’s only a stick.

Throughout this episode, everything you experienced was either pleasant, unpleasant, or neutral. At first there were neutral or pleasant sights as you strolled along the path, then unpleasant fear at a potential snake, and finally pleasant relief at the realization that it was just a stick. That aspect of experience—whether it is pleasant, unpleasant, or neutral—is called, in Buddhism, its feeling tone (or, in Western psychology, its hedonic tone). The feeling tone is produced mainly by your amygdala and then broadcast widely. It’s a simple but effective way to tell your brain as a whole what to do each moment: approach pleasant carrots, avoid unpleasant sticks, and move on from anything else.

Chasing Carrots

Two major systems keep you chasing carrots. The first system is based on the neurotransmitter dopamine. Dopamine-releasing neurons become more active when you encounter things that have been linked to rewards in the past—for example, if you get a message from a good friend you haven’t seen in a few months. These neurons rev up when you encounter something that could offer rewards in the future—such as your friend saying she wants to take you to lunch. In your mind, this neural activity produces a motivating sense of desire: you want to call her back. When you do have lunch, a part of your brain called the cingulate cortex (about the size of your finger, on the interior edge of each hemisphere) tracks whether the rewards you expected—fun with your friend, good food—actually arrive. If they do, dopamine levels stay steady. But if you’re disappointed—maybe your friend is in a bad mood—the cingulate sends out a signal that lowers dopamine levels. Falling dopamine registers in subjective experience as an unpleasant feeling tone—a dissatisfaction and discontent—that stimulates craving (broadly defined) for something that will restore its levels.
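One way to picture the cingulate’s bookkeeping is as a comparison of expected against delivered rewards, with a shortfall pulling dopamine below baseline. This is a loose illustrative model, not the actual neural computation; the baseline value and the function are assumptions made for the sketch.

```python
# Illustrative model of the reward comparison described above: when a reward
# falls short of what was expected, the dopamine signal drops below baseline,
# and that drop is felt as dissatisfaction that stimulates craving.

BASELINE = 1.0  # arbitrary steady-state level, for illustration only

def dopamine_signal(expected: float, actual: float) -> float:
    """Dopamine stays steady when expectations are met, falls when they aren't."""
    prediction_error = actual - expected
    return BASELINE + min(prediction_error, 0.0)   # only disappointment lowers it here

print(dopamine_signal(expected=1.0, actual=1.0))   # rewards arrive as expected -> 1.0
print(dopamine_signal(expected=1.0, actual=0.5))   # friend's in a bad mood -> 0.5
```

In this toy version only shortfalls move the signal, matching the text’s emphasis on disappointment; real dopamine neurons also respond to better-than-expected outcomes.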

The second system, based on several other neurotransmitters, is the biochemical source of the pleasant feeling tones that come from the actual—and anticipated—carrots of life. When these ‘pleasure chemicals’—natural opioids (including endorphins), oxytocin, and norepinephrine—surge into your synapses, they strengthen the neural circuits that are active, making them more likely to fire together in the future. Imagine a toddler trying to eat a spoonful of pudding. After many misses, his perceptual-motor neurons finally get it right, leading to waves of pleasure chemicals which help cement the synaptic connections that created the specific movements that slipped the spoon into his mouth.

In essence, this pleasure system highlights whatever triggered it, prompts you to pursue those rewards again, and strengthens the behaviors that help you get them. It works hand in hand with the dopamine-based system. For example, slaking your thirst feels good both because the discontent of low dopamine leaves and because the pleasure-chemical-based joy of cool water on a hot day arrives…

Sticks Are Stronger Than Carrots

So far, we’ve discussed carrots and sticks as if they were equals. But actually, sticks are usually more powerful, since your brain is built more for avoiding than for approaching. That’s because it’s the negative experiences, not the positive ones, that have generally had the most impact on survival.

For example, imagine our mammalian ancestors dodging dinosaurs in a worldwide Jurassic Park 70 million years ago. Constantly looking over their shoulders, alert to the slightest crackle of brush, ready to freeze or bolt or attack depending on the situation. The quick and the dead. If they missed out on a carrot—a chance for food or mating, perhaps—they usually had other opportunities later. But if they failed to duck a stick—like a predator—then they’d probably be killed, with no chance at any carrots in the future. The ones that lived to pass on their genes paid a lot of attention to negative experiences.

Let’s explore six ways your brain keeps you dodging sticks.

Vigilance And Anxiety

When you’re awake and not doing anything in particular, the baseline resting state of your brain activates a “default network,” and one of its functions seems to be tracking your environment and body for possible threats. This basic awareness is often accompanied by a background feeling of anxiety that keeps you vigilant. Try walking through a store for a few minutes without the least whiff of caution, unease, or tension. It’s very difficult.

This makes sense because our mammalian, primate, and human ancestors were prey as well as predators. In addition, most primate social groups have been full of aggression from males and females alike. And in the hominid and then human hunter-gatherer bands of the past couple million years, violence has been a leading cause of death for men. We became anxious for good reason: there was a lot to fear.

Sensitivity To Negative Information

The brain typically detects negative information faster than positive information. Take facial expressions, a primary signal of threat or opportunity for a social animal like us: fearful faces are perceived much more rapidly than happy or neutral ones, probably fast-tracked by the amygdala. In fact, even when researchers make fearful faces invisible to conscious awareness, the amygdala lights up. The brain is drawn to bad news.

High-Priority Storage

When an event is flagged as negative, the hippocampus makes sure it’s stored carefully for future reference. Once burned, twice shy. Your brain is like Velcro for negative experiences and Teflon for positive ones—even though most of your experiences are probably neutral or positive.

Negative Trumps Positive

Negative events generally have more impact than positive ones. For example, it’s easy to acquire feelings of learned helplessness from a few failures, but hard to undo those feelings, even with many successes. People will do more to avoid a loss than to acquire a comparable gain. Compared to lottery winners, accident victims usually take longer to return to their original baseline of happiness. Bad information about a person carries more weight than good information, and in relationships, it typically takes about five positive interactions to overcome the effects of a single negative one.

Lingering Traces

Even if you’ve unlearned a negative experience, it still leaves an indelible trace in your brain. That residue lies waiting, ready to reactivate if you ever encounter a fear-provoking event like the previous one.

Vicious Cycles

Negative experiences create vicious cycles by making you pessimistic, overreactive, and inclined to go negative yourself.

Avoiding Involves Suffering

As you can see, your brain has a built-in “negativity bias” that primes you for avoidance. This bias makes you suffer in a variety of ways. For starters, it generates an unpleasant background of anxiety, which for some people can be quite intense; anxiety also makes it harder to bring attention inward for self-awareness or contemplative practice, since the brain keeps scanning to make sure there is no problem. The negativity bias fosters or intensifies other unpleasant emotions, such as anger, sorrow, depression, guilt, and shame. It highlights past losses and failures, it downplays present abilities, and it exaggerates future obstacles. Consequently, the mind continually tends to render unfair verdicts about a person’s character, conduct, and possibilities. The weight of those judgments can really wear you down.

In The Simulator

In Buddhism, it’s said that suffering is the result of craving expressed through the Three Poisons: greed, hatred, and delusion. These are strong, traditional terms that cover a broad range of thoughts, words, and deeds, including the most fleeting and subtle. Greed is a grasping after carrots, while hatred is an aversion to sticks; both involve craving more pleasure and less pain. Delusion is a holding onto ignorance about the way things really are—for example, not seeing how they’re connected and changing.

Virtual Reality

Sometimes these poisons are conspicuous; much of the time, however, they operate in the background of your awareness, firing and wiring quietly along. They do this by using your brain’s extraordinary capacity to represent both inner experience and the outer world. For example, the blind spots in your left and right visual fields don’t look like holes out there in the world; rather, your brain fills them in, much like photo software shades in the red eyes of people looking toward a flash. In fact, much of what you see “out there” is actually manufactured “in here” by your brain, painted in like computer-generated graphics in a movie. Only a small fraction of the inputs to your occipital lobe comes directly from the external world; the rest comes from internal memory stores and perceptual-processing modules. Your brain simulates the world—each of us lives in a virtual reality that’s close enough to the real thing that we don’t bump into the furniture.

Inside this simulator—whose neural substrate appears to be centered in the upper-middle of your PFC—mini-movies run constantly. These brief clips are the building blocks of much conscious activity. For our ancestors, running simulations of past events promoted survival, as it strengthened the learning of successful behaviors by repeating their neural firing patterns. Simulating future events also promoted survival by enabling our ancestors to compare possible outcomes—in order to pick the best approach—and to ready potential sensory-motor sequences for immediate action. Over the past three million years, the brain tripled in size; much of this expansion has improved the capacities of the simulator, suggesting its benefits for survival…

Simulations Make You Suffer

The brain continues to produce simulations today, even when they have nothing to do with staying alive. Watch yourself daydream or go back over a relationship problem, and you’ll see the clips playing—little packets of simulated experiences, usually just seconds long. If you observe them closely, you’ll spot several troubling things:

By its very nature, the simulator pulls you out of the present moment. There you are, following a presentation at work, running an errand, or meditating, and suddenly your mind is a thousand miles away, caught up in a mini-movie. But it’s only in the present moment that we find real happiness, love, or wisdom.

In the simulator, pleasures usually seem pretty great, whether you’re considering a second cupcake or imagining a response you’ll get to a report at work. But what do you actually feel when you reenact the mini-movie in real life? Is it as pleasant as promised up there on the screen? Usually not. In truth, most everyday rewards aren’t as intense as those conjured up in the simulator.

Clips in the simulator contain lots of beliefs: Of course he’ll say X if I say Y…It’s obvious that they let me down. Sometimes these are explicitly verbalized, but much of the time they’re implicit, built into the plotting. In reality, are the implicit and explicit beliefs in your simulations true? Sometimes yes, but often no. Mini-movies keep us stuck by their simplistic view of the past and by defining out of existence real possibilities for the future, such as new ways to reach out to others or dream big dreams. Their beliefs are the bars of an invisible cage, trapping you in a life that’s smaller than the one you could actually have. It’s like being a zoo animal that’s released into a large park—yet still crouches within the confines of its old pen.

In the simulator, upsetting events from the past play again and again, which unfortunately strengthens the neural associations between an event and its painful feelings. The simulator also forecasts threatening situations in your future. But in fact, most of those worrisome events never materialize. And of the ones that do, often the discomfort you experience is milder and briefer than predicted. For example, imagine speaking from your heart; this may trigger a mini-movie ending in rejection and you feeling bad. But in fact, when you do speak from the heart, doesn’t it typically go pretty well, with you ending up feeling quite good?

In sum, the simulator takes you out of the present moment and sets you chasing after carrots that aren’t really so great while ignoring more important rewards (such as contentment and inner peace). Besides reinforcing painful emotions, it has you ducking sticks that never actually come your way or aren’t really all that bad. And the simulator does this hour after hour, day after day, even in your dreams—steadily building neural structure, much of which adds to your suffering.


Each person suffers sometimes, and many people suffer a lot. Compassion is a natural response to suffering, including your own. Self-compassion isn’t self-pity; it is simply warmth, concern, and good wishes—just like compassion for another person. Because self-compassion is more emotional than self-esteem, it’s actually more powerful for reducing the impact of difficult conditions, preserving self-worth, and building resilience. It also opens your heart, since when you’re closed to your own suffering it’s hard to be receptive to the suffering of others.

In addition to the everyday suffering of life, the path of awakening itself contains difficult experiences which also call for compassion. To become happier, wiser, and more loving, sometimes you have to swim against ancient currents within your nervous system. For example, in some ways the three pillars of practice seem unnatural: virtue restrains emotional reactions that worked well in the Serengeti, mindfulness decreases external vigilance, and wisdom cuts through beliefs that once helped us survive. It goes against the evolutionary template to undo the causes of suffering, to feel one with all things, to flow with the changing moment, and to remain unmoved by pleasant and unpleasant alike.

Of course, that doesn’t mean we shouldn’t do it! It just means we should understand what we’re up against and have some compassion for ourselves.

To nurture self-compassion and strengthen its neural circuits:

Recall being with someone who really loves you—the feeling of receiving caring activates the deep attachment system circuitry in your brain, priming it to give compassion.

Bring to mind someone you naturally feel compassion for, such as a child or a person you love—this easy flow of compassion arouses its neural underpinnings (including oxytocin, the insula [which senses the internal state of your body], and the PFC), “warming them up” for self-compassion.

Extend this same compassion to yourself—be aware of your own suffering and extend concern and good wishes toward yourself; sense compassion sifting down into raw places inside, falling like a gentle rain that touches everything. The actions related to a feeling strengthen it, so place your palm on your cheek or heart with the tenderness and warmth you’d give a hurt child. Say phrases in your mind such as May I be happy again. May the pain of this moment pass.

Overall, open to the sense that you are receiving compassion—deep down in your brain, the actual source of good feelings doesn’t matter much; whether the compassion is from you or from another person, let your sense of being soothed and cared for sink in.

The First and Second Dart

Ultimately, happiness comes down to choosing between the discomfort of becoming aware of your mental afflictions and the discomfort of being ruled by them.

–Yongey Mingyur Rinpoche

Some physical discomfort is unavoidable; it’s a crucial signal to take action to protect life and limb, like the pain that makes you pull your hand back from a hot stove. Some mental discomfort is inevitable, too. For example, as we evolved, growing emotional investments in children and other members of the band motivated our ancestors to keep those carriers of their genes alive; understandably, then, we feel distress when dear ones are threatened and sorrow when they are harmed. We also evolved to care greatly about our place in the band and in the hearts of others, so it’s normal to feel hurt if you’re rejected or scorned.

To borrow an expression from the Buddha, inescapable physical or mental discomfort is the “first dart” of existence. As long as you live and love, some of those darts will come your way.

The Dart We Throw Ourselves

First darts are unpleasant to be sure. But then we add our reactions to them. These reactions are “second darts”—the ones we throw ourselves. Most of our suffering comes from the second darts.

Suppose you’re walking through a dark room at night and stub your toe on a chair; right after the first dart of pain comes a second dart of anger: “Who moved that darn chair?” Or maybe a loved one is cold to you when you’re hoping for some caring; in addition to the natural drop in the pit of your stomach (first dart), you might feel unwanted (second dart) as a result of having been ignored as a child.

Second darts often trigger more second darts through associative neural networks: you might feel guilt about your anger that someone moved the chair, or sadness that you feel hurt yet again by someone you love. In relationships, second darts create vicious cycles: your second darts trigger reactions from the other person, which set off more second darts from you, and so on.

Remarkably, most of our second-dart reactions occur when there is in fact no first dart anywhere to be found—when there’s no pain inherent in the conditions we’re reacting to. We add suffering to them.

For example, sometimes I’ll come home from work and the house will be a mess, with the kids’ stuff all over. That’s the condition. Is there a dart in the coats and shoes on the sofa or the clutter covering the counter? No, there isn’t; no one dropped a brick on me or hurt my children. Do I have to get upset? Not really. I could ignore the stuff, pick it up calmly, or talk with them about it. Sometimes I manage to handle it that way. But if I don’t, then the second darts start landing, tipped with the Three Poisons: greed makes me rigid about how I want things to be, hatred gets me all bothered and angry, and delusion tricks me into taking the situation personally.

Saddest of all, some second-dart reactions are to conditions that are actually positive. If someone pays you a compliment, that’s a positive situation. But then you might start thinking, with some nervousness and even a little shame: Oh, I’m not really that good a person. Maybe they’ll find out I’m a fraud. Right there, needless second-dart suffering begins.

Heating Up

Suffering is not abstract or conceptual. It’s embodied: you feel it in your body, and it proceeds through bodily mechanisms. Understanding the physical machinery of suffering will help you to see it increasingly as an impersonal condition—unpleasant to be sure, but not worth getting upset about, which would just bring more second darts.

Suffering cascades through your body via the sympathetic nervous system (SNS) and the hypothalamic-pituitary-adrenal axis (HPAA) of the endocrine (hormonal) system. Let’s unscramble this alphabet soup to see how it all works. While the SNS and the HPAA are anatomically distinct, they are so intertwined that they’re best described together, as an integrated system. And we’ll focus on reactions dominated by an aversion to sticks (e.g., fear, anger) rather than a grasping for carrots, since aversive reactions usually have a bigger impact, owing to the negativity bias of the brain.

Alarms Go Off

Something happens. It might be a car suddenly cutting you off, a put-down from a coworker, or even just a worrisome thought. Social and emotional conditions can pack a wallop like physical ones since psychological pain draws on many of the same neural networks as physical pain; this is why getting rejected can feel as bad as a root canal. Even just anticipating a challenging event—such as giving a talk next week—can have as much impact as living through it for real. Whatever the source of the threat, the amygdala sounds the alarm, setting off several reactions:

The thalamus—the relay station in the middle of your brain—sends a “Wake up!” signal to your brain stem, which in turn releases stimulating norepinephrine throughout your brain.

The SNS sends signals to the major organs and muscle groups in your body, readying them for fighting or fleeing.

The hypothalamus—the brain’s primary regulator of the endocrine system—prompts the pituitary gland to signal the adrenal glands to release the “stress hormones” epinephrine (adrenaline) and cortisol.

Ready For Action

Within a second or two of the initial alarm, your brain is on red alert, your SNS is lit up like a Christmas tree, and stress hormones are washing through your blood. In other words, you’re at least a little upset. What’s going on in your body?

Epinephrine increases your heart rate (so your heart can move more blood) and dilates your pupils (so your eyes gather more light). Norepinephrine shunts blood to large muscle groups. Meanwhile, the bronchioles of your lungs dilate for increased gas exchange—enabling you to hit harder or run faster.

Cortisol suppresses the immune system to reduce inflammation from wounds. It also revs up stress reactions in two circular ways. First, it causes the brain stem to stimulate the amygdala further, which increases amygdala activation of the SNS/HPAA system—which produces more cortisol. Second, cortisol suppresses hippocampal activity (which normally inhibits the amygdala); this takes the brakes off the amygdala, leading to yet more cortisol.
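Both circular pathways amount to positive feedback. A toy simulation (with coefficients invented purely for illustration, not drawn from physiology) shows the shape of the loop: each round of cortisol weakens the hippocampal brake and stimulates the amygdala a bit more, yielding a bit more cortisol.

```python
# Toy model of the cortisol feedback loops described above: cortisol further
# stimulates the amygdala and suppresses the hippocampus (the amygdala's brake),
# so each round of the loop produces a bit more cortisol than the last.
# All coefficients are made up for illustration only.

def next_cortisol(cortisol: float, amplification: float = 1.15,
                  hippocampal_braking: float = 0.1) -> float:
    brake = hippocampal_braking * max(0.0, 1.0 - cortisol)  # brake weakens as cortisol rises
    return cortisol * amplification - brake * cortisol

level = 0.2                         # a mild initial stress response
for step in range(5):
    level = next_cortisol(level)
    print(f"step {step + 1}: cortisol {level:.2f}")
```

Without an external check on the loop, the level only climbs, which is the point of the passage: once the brakes come off the amygdala, stress feeds on itself.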

Reproduction is sidelined—no time for sex when you’re running for cover. The same for digestion: salivation decreases and peristalsis slows down, so your mouth feels dry and you become constipated.

Your emotions intensify, organizing and mobilizing the whole brain for action. SNS/HPAA arousal stimulates the amygdala, which is hardwired to focus on negative information and react intensely to it. Consequently, feeling stressed sets you up for fear and anger.

As limbic and endocrine activation increases, the relative strength of executive control from the PFC declines. It’s like being in a car with a runaway accelerator: the driver has less control over her vehicle. Further, the PFC is also affected by SNS/HPAA arousal, which pushes appraisals, attributions of others’ intentions, and priorities in a negative direction: now the driver of the careening car thinks everybody else is an idiot. For example, consider the difference between your take on a situation when you’re upset and your thoughts about it later when you’re calmer.

In the harsh physical and social environments in which we evolved, this activation of multiple bodily systems helped our ancestors survive. But what’s the cost of this today, with the chronic low-grade stresses of modern life?

Life On Simmer

Getting fired up for good reason—such as becoming passionate and enthusiastic, handling emergencies, or being forceful for a good cause—definitely has its place in life. But second darts are a bad reason to light up the SNS/HPAA system, and if they become routine, they can push the needle on your personal stress meter into the red zone. Further, apart from your individual situation, we live in a pedal-to-the-metal society that relies on nonstop SNS/HPAA activation; unfortunately, this is completely unnatural in terms of our evolutionary template.

For all of these reasons, most of us experience ongoing SNS/HPAA arousal. Even if your pot isn’t boiling over, just simmering along with second-dart activation is quite unhealthy. It continually shunts resources away from long-term projects—such as building a strong immune system or preserving a good mood—in favor of short-term crises. And this has lasting consequences.

Physical Consequences

In our evolutionary past, when most people died by forty or so, the short-term benefits of SNS/HPAA activation outweighed its long-term costs. But for people today who are interested in living well during their forties and beyond, the accumulating damage of an overheated life is a real concern. For example, chronic SNS/HPAA stimulation disturbs these systems and increases risks for the health problems listed below:

–Gastrointestinal: ulcers, colitis, irritable bowel syndrome, diarrhea, and constipation

–Immune: more frequent colds and flus, slower wound healing, greater vulnerability to serious infections

–Cardiovascular: hardening of the arteries, heart attacks

–Endocrine: type II diabetes, premenstrual syndrome, erectile dysfunction, lowered libido

Mental Consequences

For all their effects on the body, second darts usually have their greatest impact on psychological well-being. Let’s see how they work in your brain to raise anxiety and lower mood.

Repeated SNS/HPAA activity makes the amygdala more reactive to apparent threats, which in turn increases SNS/HPAA activation, which sensitizes the amygdala further. The mental correlate of this physical process is an increasingly rapid arousal of state anxiety (anxiety based on specific situations).

Additionally, the amygdala helps form implicit memories (traces of past experiences that exist beneath conscious awareness); as it becomes more sensitized, it increasingly shades those residues with fear, thus intensifying trait anxiety (ongoing anxiety regardless of the situation).

Meanwhile, frequent SNS/HPAA activation wears down the hippocampus, which is vital for forming explicit memories—clear records of what actually happened. Cortisol and related glucocorticoid hormones both weaken existing synaptic connections in the hippocampus and inhibit the formation of new ones. Further, the hippocampus is one of the few regions in the human brain that can actually grow new neurons, yet glucocorticoids prevent the birth of neurons in the hippocampus, impairing its ability to produce new memories.

It’s a bad combination for the amygdala to be oversensitized while the hippocampus is compromised; painful experiences can then be recorded in implicit memory—with all the distortions and turbo-charging of an amygdala on overdrive—without an accurate explicit memory of them. This might feel like: Something happened, I’m not sure what, but I’m really upset. This may help explain why victims of trauma can feel dissociated from the awful things they experienced, yet be very reactive to any trigger that reminds them unconsciously of what once occurred. In less extreme situations, the one-two punch of a revved-up amygdala and a weakened hippocampus can lead to feeling a little upset a lot of the time without exactly knowing why.

Depressed Mood

Routine SNS/HPAA activation undermines the biochemical basis of an even-keeled—let alone cheerful—disposition in a number of ways:

–Norepinephrine helps you feel alert and mentally energetic, but glucocorticoid hormones deplete it. Reduced norepinephrine may cause you to feel flat—even apathetic—with poor concentration; these are classic symptoms of depression.

–Over time, glucocorticoids lower the production of dopamine. This leads to a loss of enjoyment of activities once found pleasurable, another classic criterion of depression.

–Stress reduces serotonin, probably the most important neurotransmitter for maintaining a good mood. When serotonin drops, so does norepinephrine, which has already been diminished by glucocorticoids. In short, less serotonin means more vulnerability to a blue mood and less alert interest in the world.

An Intimate Process

Of course, our experience of these physiological processes is very intimate. When I’m upset, I sure don’t think about all of these biochemical details. But having a general idea of them in the back of my mind helps me appreciate the sheer physicality of a second-dart cascade, its impersonal nature and dependence on preceding causes, and its impermanence.

This understanding is hopeful and motivating. Suffering has clear causes in your brain and body, so if you change those causes, you’ll suffer a lot less. And you can change those causes. From this point on, we’re going to focus on how to do just that.

The Parasympathetic Nervous System

So far, we’ve examined how reactions powered by greed and hatred—especially the latter—ripple through your brain and body, shaped by the sympathetic nervous system. But the SNS is just one of the three wings of the autonomic nervous system (ANS), which operates mostly below the level of consciousness to regulate many bodily systems and their responses to changing conditions. The other two wings of the ANS are the parasympathetic nervous system (PNS) and the enteric nervous system (which regulates your gastrointestinal system). Let’s focus on the PNS and SNS, as they play crucial roles in your suffering—and its end.

The PNS conserves energy in your body and is responsible for ongoing, steady-state activity. It produces a feeling of relaxation, often with a sense of contentment—this is why it’s sometimes called the “rest-and-digest” system, in contrast to the “fight-or-flight” SNS. These two wings of the ANS are connected like a seesaw: when one goes up, the other goes down.

Parasympathetic activation is the normal resting state of your body, brain, and mind. If your SNS were surgically disconnected, you’d stay alive (though you wouldn’t be very useful in an emergency). If your PNS were disconnected, however, you’d stop breathing and soon die. Sympathetic activation is a change to the baseline of PNS equilibrium in order to respond to a threat or an opportunity. The cooling, steadying influence of the PNS helps you think clearly and avoid hot-headed actions that would harm you or others. The PNS also quiets the mind and fosters tranquility, which supports contemplative insight.

The Big Picture

The PNS and SNS evolved hand in hand in order to keep animals—including humans—alive in potentially lethal environments. We need both of them.

For example, take five breaths, inhaling and exhaling a little more fully than usual. This is both energizing and relaxing, activating first the sympathetic system and then the parasympathetic one, back and forth, in a gentle rhythm. Notice how you feel when you’re done. That combination of aliveness and centeredness is the essence of the peak performance zone recognized by athletes, businesspeople, artists, lovers, and meditators. It’s the result of the SNS and PNS, the gas pedal and the brakes, working in harmony together.

Happiness, love, and wisdom aren’t furthered by shutting down the SNS, but rather by keeping the autonomic nervous system as a whole in an optimal state of balance:

–Mainly parasympathetic arousal for a baseline of ease and peacefulness

–Mild SNS activation for enthusiasm, vitality, and wholesome passions

–Occasional SNS spikes to deal with demanding situations, from a great opportunity at work to a late-night call from a teenager who needs a ride home from a party gone bad

This is your best-odds prescription for a long, productive life. Of course, it takes practice.

A Path Of Practice

As the saying goes, pain is inevitable but suffering is optional. If you can simply stay present with whatever is arising in awareness—whether it’s a first dart or a second one—without reacting further, then you will break the chain of suffering right there. Over time, through training and shaping your mind and brain, you can even change what arises, increasing what’s positive and decreasing what’s negative. In the meantime, you can rest in and be nourished by a growing sense of the peace and clarity in your true nature.

These three processes—being with whatever arises, working with the tendencies of mind to transform them, and taking refuge in the ground of being—are the essential practices of the path of awakening. In many ways they correspond, respectively, to mindfulness, virtue, and wisdom—and to the three fundamental neural functions of learning, regulating, and selecting.

As you deal with different issues on your path of awakening, you’ll repeatedly encounter these stages of growth:

–Stage one—you’re caught in a second-dart reaction and don’t even realize it; your partner forgets to bring milk home and you complain angrily without seeing that your reaction is over the top.

–Stage two—you realize you’ve been hijacked by greed or hatred (in the broadest sense), but cannot help yourself; internally you’re squirming, but you can’t stop grumbling bitterly about the milk.

–Stage three—some aspect of the reaction arises, but you don’t act it out; you feel irritated but remind yourself that your partner does a lot for you already and getting cranky will just make things worse.

–Stage four—the reaction doesn’t even come up, and sometimes you forget you ever had the issue; you understand that there’s no milk, and you calmly figure out what to do now with your partner.

In education, these four stages are known succinctly as unconscious incompetence, conscious incompetence, conscious competence, and unconscious competence. They’re useful labels for knowing where you are with a given issue. The second stage is the hardest one, and often where we want to quit. So it’s important to keep aiming for the third and fourth stages—just keep at it and you’ll definitely get there!

It takes effort and time to clear old structures and build new ones. I call this the law of little things: although little moments of greed, hatred, and delusion have left residues of suffering in your mind and brain, lots of little moments of practice will replace these Three Poisons and the suffering they cause with happiness, love, and wisdom.