June 23, 2010


Internalist theories take the content of a representation to be determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke an historical theory of functions, take content to be determined by 'external' factors. Crossing the atomistic-holistic distinction with the internalist-externalist distinction yields four possible kinds of theory.


Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
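Fodor's proposal can be pictured schematically: one and the same narrow content, evaluated against different environments, determines different wide contents. The sketch below is only an illustration of this function-from-contexts idea, using the stock Twin Earth example; the function name and the sample environments are illustrative assumptions, not anything in Fodor's text.

    # Illustrative sketch: narrow content modelled as a function from a
    # context (the thinker's environment) to a wide content (what the
    # thought is about in that environment). The Earth/Twin Earth values
    # are the standard philosophical example, used here for illustration.
    def water_narrow_content(context):
        return {"Earth": "H2O", "Twin Earth": "XYZ"}.get(context, "undetermined")

    # Two molecule-for-molecule identical thinkers share the narrow content
    # (the same function), but their wide contents differ with environment.
    print(water_narrow_content("Earth"))       # -> H2O
    print(water_narrow_content("Twin Earth"))  # -> XYZ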

All the same, what a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but in which everything appears the same to each of them. The wide content of their thoughts and sayings will be different if the situation surrounding them is appropriately different: 'situation' may include the actual objects they perceive, or the chemical or physical kinds of object in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought which remains the same, given the identity of the way things appear to them, regardless of these differences of surroundings. Partisans of wide content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being explicable in terms of narrow content plus context.

Even so, the distinction between facts and values has outgrown its name: it applies not only to matters of fact vs. matters of value, but also to statements that something is vs. statements that something ought to be. Roughly, factual statements - 'is statements' in the relevant sense - represent some state of affairs as obtaining, whereas normative statements - evaluative and deontic ones - attribute goodness to something, or ascribe, to an agent, an obligation to act. Neither distinction is merely linguistic. Specifying a book's monetary value is making a factual statement, though it attributes a kind of value. 'That is a good book' expresses a value judgement though the term 'value' is absent (nor would 'valuable' be synonymous with 'good'). Similarly, 'we are morally obligated to fight' superficially expresses a factual statement, and 'By all indications it ought to rain' makes a kind of ought-claim; but the former is an ought-statement, the latter an (epistemic) is-statement.

Theoretical difficulties also beset the distinction. Some have absorbed values into facts, holding that all value is instrumental: roughly, to have value is to contribute - in a factually analysable way - to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that values are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating or attributing an obligation, on the one hand, and saying how the world is, on the other.

Fact/value distinctions may be defended by appeal to the notion of intrinsic value, value a thing has in itself and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements, but even if they imply that something has intrinsic value - e.g., moral value - they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of valuational or deontic judgements: thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.

Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulations - a belief is justified, or constitutes knowledge. Its standards of justification, then, would be like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual terms.

Thus far, belief has been depicted as all-or-nothing. Some philosophers, however, have introduced the related notion of acceptance: to accept a proposition is to treat it as one we have grounds for thinking true; acceptance is governed by epistemic norms, is partially subject to voluntary control, and has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture, and does not replace it.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent 'S' exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or in God a matter of your believing that free-market economies are desirable or that God exists.

Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

The moral philosopher Richard Price (1723-91) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as that pro-attitude remains united with his belief that God exists, the belief may persist, and perhaps reasonably so, in a way that an ordinary propositional belief would not.

Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge, construed as an even broader category. They have proposed various examples of propositional knowledge that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often recognized by analyses of propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats propositional knowledge as merely the ability to provide a correct answer to a possible question. However, White may be equating 'producing' knowledge, in the sense of producing 'the correct answer to a possible question', with 'displaying' knowledge, in the sense of manifesting knowledge. The latter can be done even by very young children and some non-human animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that 'h' without believing or accepting that 'h' can be modified so as to illustrate this point. Such examples concern an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horse races. If the example is modified so that the hypothetical 'seer' never picks winners but only muses over whether those horses might win, or only reports those horses as winning, this behaviour is as much of a candidate for the person's manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.

These considerations expose limitations in Edward Craig's analysis (1990) of the concept of knowing in terms of a person's being a satisfactory informant in relation to an inquirer who wants to find out whether or not 'h'. Craig realizes that counterexamples to his analysis appear to be constituted by knowers who are too recalcitrant to inform the inquirer, too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried 'Wolf'). Craig admits that this might make preferable some alternative view of knowledge, as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition concerning one's having the power to proceed in a way that represents the state of affairs causally involved in one's so proceeding. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.

Knowledge and belief: according to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950; Ayer, 1956) or conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato (429-347 BC), in view of his claim that knowledge is infallible while belief or opinion is fallible ('Republic' 476-9). But this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say 'I do not believe she is guilty. I know she is' and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying 'I do not just believe she is guilty, I know she is', where 'just' makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: 'You did not hurt him, you killed him'.

H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley's version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it might also be accompanied by confidence. Woozley remarks that the test of whether I know something is 'what I can do, where what I can do may include answering questions'. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, 'I am unsure my answer is true; still, I know it is correct'. But Woozley explains this tension by means of a distinction between conditions under which we are justified in making a claim (such as a claim to know something), and conditions under which the claim we make is true. While 'I know such and such' might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley's defence of the separability thesis. In Radford's view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years earlier, and yet he is able to give several correct responses to questions such as 'When did the Battle of Hastings occur?' Since he has forgotten that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066; he would deny being sure (or having the right to be sure) that 1066 was the correct date. Radford would nonetheless insist that Jean knows when the Battle occurred, since he clearly remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought at least to believe that we have the knowledge we claim, or else our behaviour is 'intentionally misleading'.

Those who agree with Radford's defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain's (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Jean does know that the Battle of Hastings took place in 1066; Armstrong will grant Radford that point. In fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066 when it had in fact occurred in some other year, and had he subsequently 'guessed' that it took place in 1066, we would surely describe the situation as one in which Jean's false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one in which Jean's true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, D.C. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford's examinee is in a similar position. Suppose that Jean's memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his responses are mere guesswork, and hence every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant. So even if Jean has the belief that Radford denies him, Radford does not have an example of knowledge unattended by justified belief.

Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of 'sensible qualities': colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like 'sense-data' or 'percepts' exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include 'scepticism' and 'idealism'.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.

Furthermore, perceptual knowledge is knowledge acquired by or through the senses and includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something - that the light has turned green, that the roast is burning, that the melon is overripe, and that it is time to get up - by some sensory means. Seeing that the light has turned green is learning something - that the light has turned green - by use of the eyes. Feeling that the melon is overripe is coming to know a fact - that the melon is overripe - by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspapers, that our team has lost again; see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees - hence, comes to know something about the gauge (that it says what it says) - one cannot be described as coming to know, by perceptual means, that one needs gas. If one cannot hear that the bell is ringing, one cannot - in this way at least - hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, etc.) that some other condition, 'b's' being 'G', obtains. When this occurs, the knowledge (that 'a' is 'F') is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.

Perhaps a better strategy is to tie the account of justifying evidence to explanation rather than to truth alone. Since at least the time of Aristotle, philosophers have emphasized the importance of explanatory knowledge: in its simplest terms, we want to know not only what is the case but also why it is. This consideration suggests that we define an explanation as an answer to a why-question. Such a definition would, however, be too broad, because some why-questions are requests for consolation (Why did my son have to die?) or for moral justification (Why should women not be paid the same as men for the same work?). It would also be too narrow, because some explanations are responses to how-questions (How does radar work?) or how-possibly questions (How is it possible for cats always to land on their feet?).

In its most general sense, 'to explain' means to make clear, to make plain, or to provide understanding. Definitions of this sort are philosophically unhelpful, for the terms used in the definiens are no less problematic than the term to be defined. Moreover, since a wide variety of things require explanation, and since many different types of explanation exist, a more complex account is required. To facilitate such an account, it helps to introduce a bit of technical terminology. The term 'explanandum' is used to refer to that which is to be explained; the term 'explanans' refers to that which does the explaining; the explanans and the explanandum taken together constitute the explanation.

One common type of explanation occurs when deliberate human actions are explained in terms of conscious purposes. 'Why did you go to the pharmacy yesterday?' 'Because I had a headache and needed to get some aspirin.' It is tacitly assumed that aspirin is an appropriate medication for headaches and that going to the pharmacy would be an efficient way of getting some. Such explanations are, of course, teleological, referring, as they do, to goals. The explanans is not the realization of a future goal - if the pharmacy happened to be closed for stocktaking, the aspirin would not have been obtained there, but that would not invalidate the explanation. Some philosophers would say that the antecedent desire to achieve the end is what does the explaining; others might say that the explaining is done by the nature of the goal and the fact that the action promoted the chances of realizing it (Taylor, 1964). In any case, it should not automatically be assumed that such explanations are causal. Philosophers differ considerably on whether these explanations are to be framed in terms of causes or reasons, but the distinction cannot be used to show that the relation between reasons and the actions they justify is in no way causal, and there are many differing analyses of such concepts as intention and agency. Expanding the domain beyond consciousness, Freud maintained, in addition, that much human behaviour can be explained in terms of unconscious as well as conscious wishes. Those Freudian explanations should probably be construed as basically causal.

Problems arise when teleological explanations are offered in other contexts. The behaviour of non-human animals is often explained in terms of purpose, e.g., the mouse ran to escape from the cat. In such cases the existence of conscious purpose seems dubious. The situation is still more problematic when a super-empirical purpose is invoked, e.g., the explanation of living species in terms of God's purpose, or the vitalistic explanation of biological phenomena in terms of an entelechy or vital principle. In recent years an 'anthropic principle' has received attention in cosmology (Barrow and Tipler, 1986). All such explanations have been condemned by many philosophers as anthropomorphic.

Nevertheless, philosophers and scientists often maintain that functional explanations play an important and legitimate role in various sciences such as evolutionary biology, anthropology and sociology. For example, in the case of the peppered moth in Liverpool, the change in colour from the light phase to the dark phase and back again to the light phase provided adaptation to a changing environment and fulfilled the function of reducing predation on the species. In the study of primitive societies, anthropologists have insisted that various rituals (the rain dance, for example), which may be inefficacious in bringing about their manifest goals (producing rain), actually promote social cohesion at a period of stress (often a drought). Philosophers who admit teleological and/or functional explanations in common sense and science often take pains to argue that such explanations can be analysed entirely in terms of efficient causes, thereby escaping the charge of anthropomorphism (Wright, 1976); again, however, not all philosophers agree.

Causal theories of propositional knowledge differ over whether they deviate from the tripartite analysis by dropping the requirement that one's believing (accepting) that 'h' be justified. The same variation occurs regarding reliability theories, which present the knower as reliable concerning the issue of whether or not 'h', in the sense that some of one's cognitive or epistemic states, φ, are such that, given further characteristics of oneself - possibly including relations to factors external to one and of which one may not be aware - it is nomologically necessary (or at least probable) that 'h'. In some versions, the reliability is required to be 'global', in so far as it must concern a nomological (or probabilistic) relationship linking states of type φ to the acquisition of true beliefs about a wider range of issues than merely whether or not 'h'. There is also controversy about how to delineate the limits of what constitutes a type of relevant personal state or characteristic. (For example, in a case where Mr Notgot has not been shamming and one does thereby know that someone in the office owns a Ford, is the relevant type of state something broad, such as a way of forming beliefs about the properties of persons spatially close to one, or instead something narrower, such as a way of forming beliefs about Ford owners in offices partly upon the basis of their relevant testimony?)

One important variety of reliability theory is the conclusive reasons account, which includes a requirement that one's reasons for believing that 'h' be such that, in one's circumstances, if 'h' were not to be the case then, e.g., one would not have the reasons one does for believing that 'h', or, e.g., one would not believe that 'h'. Roughly, the latter is demanded by theories that treat a knower as 'tracking the truth', theories that include the further demand that, roughly, if it were the case that 'h', then one would believe that 'h'. A version of the tracking theory has been defended by Robert Nozick (1981), who adds that if what he calls a 'method' has been used to arrive at the belief that 'h', then the antecedent clauses of the two conditionals that characterize tracking will need to include the hypothesis that one would employ the very same method.
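In outline, the tracking conditions are standardly rendered as follows (this is a schematic summary rather than Nozick's exact wording, with 'h' used as elsewhere in this discussion): one knows that 'h', via a method M, only if (1) 'h' is true; (2) one believes, via M, that 'h'; (3) if 'h' were not true, one would not (using M) believe that 'h'; and (4) if 'h' were true, one would (using M) believe that 'h'.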

But unless more conditions are added to Nozick's analysis, it will be too weak to explain why one lacks knowledge in a version of the last variant of the tricky Mr Notgot case described above, where we add the following details: (a) Mr Notgot's compulsion is not easily changed, (b) while in the office, Mr Notgot has no other easy trick of the relevant type to play on one, and (c) one arrives at one's belief that 'h' not by reasoning through a false belief but by basing the belief that 'h' upon a true existential generalization of one's evidence.

Nozick's analysis is in addition too strong to permit anyone ever to know that 'h5': 'Some of my beliefs about beliefs might be otherwise, e.g., I might have rejected one of them'. If I know that 'h5', then satisfaction of the antecedent of one of Nozick's conditionals would involve its being false that 'h5', thereby thwarting satisfaction of the consequent's requirement that I not then believe that 'h5'. For the belief that 'h5' is itself one of my beliefs about beliefs (Shope, 1984).

The Representational Theory of Mind defines intentional mental states, such as beliefs and desires, as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry. The Representational Theory of Mind also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition 'q' from the propositions 'p' and 'if p then q' is (among other things) to have a sequence of thoughts of the form 'p', 'if p then q', 'q'.
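This computational gloss on inference can be pictured with a toy sketch. What follows is only an illustration of the idea that transitions between thoughts are driven by the form of structured representations; the data types and example sentences are assumptions introduced for illustration, not anything drawn from the theory's proponents.

    # Illustrative sketch: thoughts modelled as symbolic representations, and
    # inference as a formal operation (modus ponens) defined over their structure.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Conditional:
        antecedent: str
        consequent: str

    def think_through(thoughts):
        """Extend a sequence of thoughts with any consequents licensed by modus ponens."""
        extended = list(thoughts)
        for t in thoughts:
            if isinstance(t, Conditional) and t.antecedent in thoughts:
                extended.append(t.consequent)  # the thought 'q' follows 'p' and 'if p then q'
        return extended

    # A sequence of thoughts of the form p, 'if p then q' is extended by q.
    sequence = ["Elvis is dead", Conditional("Elvis is dead", "Graceland is in mourning")]
    print(think_through(sequence))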

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized, i.e., that all mental facts have explanations in terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as 'folk psychology') are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do. We have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)

Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the 'intentional stance' toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational, i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a 'moderate' realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic 'structural' or 'syntactic' properties. The semantic properties of a mental state, however, are determined by its extrinsic properties, e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ('what-it's-like') features ('Qualia'), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states such as seeing that something is blue are sometimes thought of as hybrid states, consisting of, for example, a Non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)

Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that Non-conceptual representations - percepts ('impressions'), images ('ideas') and the like - are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only Non-conceptual representations construed in this way.

Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as Qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether Qualia are intrinsically representational (Loar) or not (Block, Peacocke).)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal - though not in the same way.)

The main argument for representationalism appeals to the transparency of experience. The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to 'see through it' to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of 'symbol-filled arrays.' (See the account of mental images in Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences - Qualia themselves - that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual 'scenario' (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is 'correct' (a semantic property) if in the corresponding 'scene' (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the 'phenomenal concept', a conceptual/phenomenal hybrid consisting of a phenomenological 'sample' (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, 'you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.' One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about 'P', without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Cf. Jackson 1982, 1986 and Nagel 1974.)

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties, i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained exclusively in terms of discursive, or propositional, representations and the cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation 'pictorial'; though of course there may be imagery in other modalities - auditory, olfactory, etc. - as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., representation in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., representation in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not), would be digital.
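
By way of illustration only - the example, names and values below are invented, not drawn from the authors cited - the contrast can be sketched as follows: an analog representation does its representing by a continuously variable magnitude, a digital one by a property it simply has or lacks.

```python
# A rough, hypothetical sketch of the analog/digital contrast described above.
# An "analog" representation varies continuously in the property that does the
# representing; a "digital" one either has the relevant property or lacks it.

from dataclasses import dataclass

@dataclass
class AnalogImage:
    brightness: float  # may take any value in [0.0, 1.0]: more or less bright

    def degree_of_daylight_represented(self) -> float:
        # The degree of representation varies continuously with brightness.
        return self.brightness

@dataclass
class DigitalThought:
    about_elvis: bool  # a thought either is or is not about Elvis

    def represents_elvis(self) -> bool:
        return self.about_elvis

if __name__ == "__main__":
    print(AnalogImage(brightness=0.73).degree_of_daylight_represented())  # 0.73
    print(DigitalThought(about_elvis=True).represents_elvis())            # True
```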

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is 'quasi-pictorial' when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially - for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)
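
A minimal sketch, under invented assumptions, of how 'distance' between parts of a representation might be defined functionally rather than spatially, in the spirit of the Rey 1981 suggestion above: the parts and the step-counting scheme below are purely illustrative.

```python
# Hypothetical sketch of 'functional' distance: parts of a representation are
# stored in a list, and the "distance" between two parts is the number of
# discrete retrieval steps separating them in storage, not a spatial magnitude.

def functional_distance(parts: list[str], a: str, b: str) -> int:
    """Number of discrete computational steps separating parts a and b."""
    return abs(parts.index(a) - parts.index(b))

# Parts of an imagined object, stored in a fixed order.
duck_parts = ["bill", "head", "neck", "body", "tail"]

print(functional_distance(duck_parts, "bill", "neck"))  # 2 steps
print(functional_distance(duck_parts, "head", "tail"))  # 3 steps
```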

Tye (1991) proposes a view of images on which they are hybrid representations, consisting of both pictorial and discursive elements. On Tye's account, images are '(labelled) interpreted symbol-filled arrays.' The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each 'cell' in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).
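
As a toy rendering of such a structure (not Tye's own notation; the grid and symbols are invented), one might picture a small array in which cell position carries the viewer-centred locational content while the stored symbols carry the discursive content.

```python
# A toy, hypothetical 'interpreted symbol-filled array': cell position has
# representational significance (a viewer-centred 2-D location), while the
# symbol in each cell represents discursively ('E' for an edge, 'R' for a
# red surface).

image = [
    ["",  "E", ""],
    ["E", "R", "E"],
    ["",  "E", ""],
]

# Read off what the array represents at each occupied location.
for row_index, row in enumerate(image):
    for col_index, symbol in enumerate(row):
        if symbol:
            print(f"at ({row_index}, {col_index}): {symbol}")
```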

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.

Causal-informational theories hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories and Teleological Theories. The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories hold that the content of a mental representation is grounded in its causal, computational or inferential relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (Non-conceptual) content of experiential states. They thus tend to be externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to Internalist individuation of the content (if not the reference) of such states.

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are Externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both 'narrow' content (determined by intrinsic factors) and 'wide' or 'broad' content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982), and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow content can be characterized as a function from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role or its phenomenology.
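
A minimal sketch of this 'function from context to wide content' idea, using the stock Twin-Earth case; the contexts and contents below are stand-ins chosen purely for illustration, not anything taken from the cited texts.

```python
# Hypothetical sketch: narrow content modelled as a function from a context
# (the external factors) to a wide content.

def water_narrow_content(context: str) -> str:
    """Maps a context to the wide content a 'water' thought has in it."""
    watery_stuff = {"Earth": "H2O", "Twin Earth": "XYZ"}
    return f"the thought is about {watery_stuff[context]}"

# Molecule-for-molecule twins share the narrow content (the same function);
# their wide contents differ because the function is applied to different contexts.
print(water_narrow_content("Earth"))       # the thought is about H2O
print(water_narrow_content("Twin Earth"))  # the thought is about XYZ
```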

Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that there may be no need for narrow content in naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frege cases, are nomologically either impossible or dismissible as exceptions to non-strict psychological laws.

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind, claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind thus joins the representational theory of mind in attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information-processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some - so-called 'subpersonal' or 'sub-doxastic' representations - are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific representational theory of mind.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental. That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the 'mental models' of Johnson-Laird 1983, the 'retinal arrays,' 'primal sketches' and '2½-D sketches' of Marr 1982, the 'frames' of Minsky 1974, the 'sub-symbolic' structures of Smolensky 1989, the 'quasi-pictures' of Kosslyn 1980, and the 'interpreted symbol-filled arrays' of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief.

The classicists hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists hold that mental representations are realized by patterns of activation in a network of simple processors ('nodes') and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism - 'localist' versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program.

Classicists are motivated (in part) by properties thought seems to share with language. Jerry Alan Fodor's (1935-) Language of Thought Hypothesis (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
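
A toy sketch, in no way Fodor's own formalism (the symbols and the single formation rule are invented for illustration), of how a finite stock of primitives plus recursive formation rules yields complex representations whose contents are fixed compositionally:

```python
# Hypothetical sketch of compositional, recursively generated representations.

from dataclasses import dataclass

@dataclass
class Prim:
    name: str                 # a primitive mental symbol, e.g. JOHN, MARY, LOVES

@dataclass
class Pred:
    relation: Prim            # formation rule: a relation combined with
    subject: "Prim | Pred"    # two (possibly complex) arguments
    obj: "Prim | Pred"

def content(rep) -> str:
    """Content of a complex is fixed by its constituents and their structure."""
    if isinstance(rep, Prim):
        return rep.name.lower()
    return f"{content(rep.subject)} {content(rep.relation)} {content(rep.obj)}"

john, mary, loves = Prim("JOHN"), Prim("MARY"), Prim("LOVES")
print(content(Pred(loves, john, mary)))  # john loves mary
print(content(Pred(loves, mary, john)))  # mary loves john (systematicity)
```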

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of an evolving distribution of 'weight' (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
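
A minimal sketch of this style of learning, using a single-unit network and an invented two-feature classification task (not a model from the connectionist literature): no hypotheses are formulated; the weights on the connections are simply nudged after each error over repeated exposures.

```python
# Hypothetical single-unit network 'trained up' by repeated exposure.

training_data = [
    # (features, target): classify items by two invented numerical features.
    ((1.0, 0.2), 1),
    ((0.9, 0.1), 1),
    ((0.2, 0.8), 0),
    ((0.1, 0.9), 0),
]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                      # repeated exposure to the objects
    for (x1, x2), target in training_data:
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = target - output
        weights[0] += rate * error * x1  # adjust connection strengths
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)                     # the learned distribution of weights
```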

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively 'brittle' or 'fragile.'

Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (Anthologies of the central contemporary papers in the classicist/connectionist debate provide useful introductory material as well.)

Stich (1983) accepts that mental processes are computational, but denies that computations are sequences of mental representations; others accept the notion of mental representation, but deny that the computational theory of mind provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the system's components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.
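
A schematic illustration of the contrast (the equations and coefficients below are invented, not drawn from Van Gelder): the 'state' here is a pair of continuously varying quantities whose evolution is mutually determining, rather than a sequence of discrete symbol manipulations.

```python
# Hypothetical two-variable dynamical system, stepped forward numerically.

def step(x: float, y: float, dt: float = 0.01) -> tuple[float, float]:
    # Each variable's rate of change depends on the other (mutual determination).
    dx = -0.5 * x + 1.0 * y
    dy = -1.0 * x - 0.5 * y
    return x + dt * dx, y + dt * dy

state = (1.0, 0.0)
for _ in range(500):
    state = step(*state)

print(state)  # the total state evolves continuously toward equilibrium
```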

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. Computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that you took to sniffing snuff. I am thinking about you, and if what I think of you (that you take snuff) is true of you, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that you take snuff is to token in some way a mental representation whose content is that you take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that you take snuff. I am talking about you, and if what I say of you (that you take snuff) is true of you, then my utterance is true. Now, to say that you take snuff is (in part) to utter a sentence that means that you take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express. On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.

Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can only be understood in terms of its relation to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, a mental state similarly may be identified only in terms of its relations with others. Moderate holism may allow that other things besides these relationships also count; extreme holism would hold that a network of relationships is all that we have. A holistic view of science holds that experience only confirms or disconfirms large bodies of doctrine, impinging at the edges, and leaving some leeway over the adjustment that it requires.

Once again, in the philosophy of mind and language, externalism is the view that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since the Cartesian view holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that you can know without being justified in believing that you know.

However, atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What the American philosopher of mind Jerry Alan Fodor (1935-) calls the crude causal theory, for example, takes a representation to be a |cow| - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how |cow|s must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a |cow| if it behaves as a |cow| should behave in inference.


Theoretical difficulties also beset the fact/value distinction. Some have absorbed values into facts, holding that all value is instrumental: roughly, to have value is to contribute - in a factually analysable way - to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that values are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating or attributing an obligation, on the one hand, and saying how the world is, on the other.

The fact/value distinction may be defended by appeal to the notion of intrinsic value, the value a thing has in itself and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements, but even if they imply that something has intrinsic value - e.g., moral value - they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of valuational or deontic judgements: thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.

Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then perhaps epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulations - a belief is justified, or constitutes knowledge. Its standards of justification would then be like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual terms.

Thus far, belief has been depicted as all-or-nothing. There is, however, also the notion of accepting a proposition for which we have grounds for thinking it true: acceptance is governed by epistemic norms, is partially subject to voluntary control, and has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture, and does not replace it.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent 'S' exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or God a matter of your believing that free-market economies are desirable or that God exists.

Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that essentially includes an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

The moral philosopher Richard Price (1723-91) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this pro-attitude remains united with his belief that God exists, that belief may persist - and reasonably so - in a way that an ordinary propositional belief would not.

The correlative way of elaborating on the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Applying the point once again to reliabilism, the claim is that a believer who has no reason to think that he has such a cognitive power, and perhaps even has good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to 'bite the bullet' and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent Internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly Internalist sort, which will rule out the offending examples while stopping short of a full internalism. But, while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains an open question whether there are other problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general Internalist view of justification that the externalist is committed to rejecting.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, while it must be objectively true that beliefs for which such a factor is available are likely to be true, that further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The Internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an Internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction does exist) that such individuals are epistemically justified in their beliefs. It is, at least, less vulnerable to Internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge can be assumed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an Internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors (views that appeal to both internal and external elements are standardly classified as externalist).

As with justification and knowledge, the traditional view of content has been strongly Internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - and not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the Internalist requirement for justification. An Internalist might insist that there are no justification relations of these sorts, and that only internally accessible content can be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Except for alleged cases of things that are evident for one just by being true, or of self-evident truths, it has often been thought that anything that is known must satisfy certain criteria or standards as well as being true. These criteria are general principles that will make a proposition evident or just make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not be already evident or warranted, originally 'create' p's epistemic status. If that status can in turn be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.

Traditional suggestions for such criteria include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived which are neither self-evident nor already criterially evident.

The upshot is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. Transmission criteria might not simply ‘pass’ evidence on linearly from a foundation of highly evident ‘premisses’ to ‘conclusions’ that are never more evident.

An argument is a group of statements, some of which purportedly provide support for another. The statements which purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case, if all its premisses are true, its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
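
As a small worked illustration of the validity test just described (the helper names below are my own), a propositional argument form can be checked by exhaustively searching for an assignment on which all premisses are true and the conclusion false; the form mentioned earlier - p, if p then q, therefore q - passes the test, while affirming the consequent fails it.

```python
# Hypothetical validity checker for two-variable propositional argument forms.

from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

def is_valid(premisses, conclusion) -> bool:
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premisses) and not conclusion(p, q):
            return False   # counterexample: premisses true, conclusion false
    return True

# Modus ponens: p, if p then q, therefore q
print(is_valid([lambda p, q: p, lambda p, q: implies(p, q)],
               lambda p, q: q))          # True: valid

# Affirming the consequent: q, if p then q, therefore p
print(is_valid([lambda p, q: q, lambda p, q: implies(p, q)],
               lambda p, q: p))          # False: invalid
```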

Finally, a proof, at the least, is a collection of considerations and reasons that instill and sustain conviction that some proposed theorem - the theorem proved - is not only true, but could not possibly be false. A perceptual observation may instill the conviction that the water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything but 5.

No one has succeeded in replacing this largely psychological characterization of proofs with a more objective characterization. The reconstruction of proofs as mechanical derivations in formal-logical systems fails almost completely to capture ‘proofs’ as mathematicians are quite content to give them. For example, formal-logical derivations depend solely on the logical form of the propositions considered, whereas proofs usually depend in large measure on the content of propositions, not merely their logical form.
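As a hedged sketch of what such a formal derivation looks like (the example is added for illustration and is not the author's), one can derive '2 + 3 = 5' step by step from the recursive definition of addition, writing the numerals as iterated successors of 0, so that 2 = S(S(0)), 3 = S(S(S(0))), and 5 = S(S(S(S(S(0))))):

\[
\begin{aligned}
2 + 3 &= 2 + S(S(S(0))) \\
      &= S(2 + S(S(0))) && \text{since } a + S(b) = S(a + b) \\
      &= S(S(2 + S(0))) \\
      &= S(S(S(2 + 0))) \\
      &= S(S(S(2))) && \text{since } a + 0 = a \\
      &= S(S(S(S(S(0))))) = 5.
\end{aligned}
\]

Every step appeals only to the defining equations, which is what gives the conclusion its 'could not be otherwise' character; an ordinary mathematician's proof would normally skip such steps and trade instead on the content of the arithmetical concepts involved.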

Defeated in two wars, Germany appeared to have invaded vast territories of the world’s mind, with Nietzsche himself as no mean conqueror. For his was the vision of things to come. Much, too much, would strike him as déjà vu: Yes, he had foreseen it, and he would understand, for the ‘Modern Mind’ speaks German - not always good German, but fluent German nonetheless. It was forced to learn the idiom of Karl Marx, and was delighted to be introduced to itself in the language of Sigmund Freud; taught by Ranke and later Max Weber, it acquired its historical and sociological self-consciousness, moved out of its tidy Newtonian universe on the instruction of Einstein, and followed a design of Oswald Spengler’s in sending, from the depth of its spiritual depression, most ingeniously engineered objects higher than the moon. Whether it discovers, with Heidegger, the true habitation of its Existenz on the frontier boundaries of Nothing, or meditates, with Sartre and Camus, on le Néant or the Absurd; whether - to pass to its less serious moods - it is nihilistically young and profitably angry in London or rebelliously debauched and Buddhistic in San Francisco - it is part of a story told by Nietzsche.

As for modern German literature and thought, it is hardly an exaggeration to say that they would not be what they are if Nietzsche had never lived. Name almost any poet, man of letters, or philosopher who wrote in German during the twentieth century and attained stature and influence - Rilke, George, Kafka, Thomas Mann, Ernst Jünger, Musil, Benn, Heidegger, or Jaspers - and you name at the same time Friedrich Nietzsche. He is to them all - whether or not they know and acknowledge it (most of them do) - what St. Thomas Aquinas was to Dante: the categorical interpreter of a world that they contemplate poetically or philosophically without ever radically upsetting its Nietzschean structure.

He was convinced that it would take at least fifty years before a few men would understand what he had accomplished. He feared that even then his teaching would be misinterpreted and misapplied. “I am terrified,” he wrote, “by the thought of the sort of people who may one day invoke my authority.” Yet is this not, he added, the anguish of every great teacher? Still, the conviction that he was a great teacher never left him after he had passed through that period of sustained inspiration in which he wrote the first part of Zarathustra. After this, all his utterances convey the disquieting self-confidence and the terror of a man who has reached the culmination of the paradox that he embodies, and which has since cast its dangerous spell over some of the finest and some of the coarsest minds.

Are we, then, in a better position to probe Nietzsche’s mind and to avoid the misunderstanding, which he anticipated, that he was merely concerned with religious, philosophical, or political controversies fashionable in his day? If this is a misinterpretation, can we put anything more valid in its place? What is the knowledge that he claims to have, raising him in his own opinion far above the contemporary level of thought? What is the discovery that serves him as a lever to unhinge the whole fabric of traditional values?

It is the knowledge that God is dead.

The death of God he calls the greatest event in modern history and the cause of extreme danger. Much of its paradoxical quality is contained in these words. He never said that there was no God, but that the Eternal had been vanquished by Time and that the Immortal suffered death at the hands of mortals: “God is dead.” It is like a cry mingled of despair and triumph, reducing, by comparison, the whole story of atheism and agnosticism before and after him to the level of respectable mediocrity and making it sound like a collection of announcements by bankers who regret they are unable to invest in an unsafe proposition. Nietzsche, for the nineteenth century, brings to its perverse conclusion a line of religious thought and experience linked with the names of St. Paul, St. Augustine, Pascal, Kierkegaard, and Dostoevsky - minds for whom God has his clearly defined place, but to whom He came in order to challenge their natural being, making demands that appeared absurd in the light of natural reason. These men are of the family of Jacob: having wrestled with God for His blessing, they ever after limp through life with the framework of Nature incurably out of joint. Nietzsche too believed that he prevailed against God in that struggle, and won a new name for himself, the name of Zarathustra. Yet the words he spoke on his mountain to the angel of the Lord were, as it were: I will not let thee go, except thou curse me. Or, in words that Nietzsche did in fact speak: “I have on purpose devoted my life to exploring the whole contrast to a truly religious nature. I know the Devil and all his visions of God.”

“God is dead” - this is the very core of Nietzsche’s spiritual existence, and what follows is despair and hope in a new greatness of man, visions of catastrophe and glory, the icy brilliance of analytical reason, fathoming with affected irreverence those hidden depths, in the manner of a ritual healer.

Perhaps by definition alone comes the unswerving call of atheism: the denial of, or lack of belief in, the existence of a god or gods. The term atheism comes from the Greek prefix ‘a-’, meaning “without,” and the Greek word ‘theos’, meaning “deity.” The denial of gods’ existence is also known as strong, or positive, atheism, whereas the lack of belief in a god is known as negative, or weak, atheism. Although atheism is often contrasted with agnosticism - the view that we cannot know whether a deity exists or not and should therefore suspend belief - negative atheism is in fact compatible with agnosticism.

About one-third of the world’s population adheres to a form of Christianity. Latin America has the largest number of Christians, most of whom are Roman Catholics. Islam is practised by over one-fifth of the world’s population, most of whom live in parts of Asia, particularly the Middle East.

Atheism has wide-ranging implications for the human condition. In the absence of belief in a god, ethical goals must be determined by secular aims and concerns, human beings must take full responsibility for their destiny, and death marks the end of a person’s existence. As of 1994 there were an estimated 240 million atheists around the world, comprising slightly more than 4 percent of the world’s population, including those who profess atheism, skepticism, disbelief, or irreligion. The estimate of nonbelievers increases significantly, to about 21 percent of the world’s population, if negative atheists are included.

From ancient times, people have at times used atheism as a term of abuse for religious positions they opposed. The first Christians were called atheists because they denied the existence of the Roman deities. Over time, several misunderstandings of atheism have arisen: That atheists are immoral, that morality cannot be justified without belief in God, and that life has no purpose without belief in God. Yet there is no evidence that atheists are any less moral than believers. Many systems of morality have been developed that do not presuppose the existence of a supernatural being. Moreover, the purpose of human life may be based on secular goals, such as the betterment of humankind.

In Western society the term atheism has been used more narrowly to refer to the denial of theism, in particular Judeo-Christian theism, which asserts the existence of an all-powerful, all-knowing, all-good personal being. This being created the universe, took an active interest in human concerns, and guides his creatures through divine disclosure known as revelation. Positive atheists reject this theistic God and the associated beliefs in an afterlife, a cosmic destiny, a supernatural origin of the universe, an immortal soul, the revealed nature of the Bible and the Qur'an (Koran), and a religious foundation for morality.

Theism, however, is not a characteristic of all religions. Some religions reject theism but are not entirely atheistic. Although the theistic tradition is fully developed in the Bhagavad-Gita, the sacred text of Hinduism, earlier Hindu writings known as the Upanishads teach that Brahman (ultimate reality) is impersonal. Positive atheists reject even the pantheistic aspects of Hinduism that equate God with the universe. Several other Eastern religions, including Theravada Buddhism and Jainism, are commonly believed to be atheistic, but this interpretation is not strictly correct. These religions do reject a theistic God believed to have created the universe, but they accept numerous lesser gods. At most, such religions are atheistic in the narrow sense of rejecting theism.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In the book’s opening passage, the sage Zarathustra comes down from the mountain where he has spent the last ten years alone to preach to the people.

In the Western intellectual world, nonbelief in the existence of God is a widespread phenomenon with a long and distinguished history. Philosophers of the ancient world such as Lucretius were nonbelievers. Even in the Middle Ages (5th to 15th centuries) there were currents of thought that questioned theist assumptions, including skepticism, the doctrine that true knowledge is impossible, and naturalism, the belief that only natural forces control the world. Several leading thinkers of the Enlightenment (1700-1789) were professed atheists, including the French philosopher Baron d’Holbach and the French encyclopedist Denis Diderot. Expressions of nonbelief are also found in classics of Western literature, including the writings of the English poets Percy Shelley and Lord Byron, the English novelist Thomas Hardy, the French philosophers Voltaire and Jean-Paul Sartre, the Russian author Ivan Turgenev, and the American writers Mark Twain and Upton Sinclair. In the 19th century the most articulate and best-known atheists and critics of religion were the German philosophers Ludwig Feuerbach, Karl Marx, Arthur Schopenhauer, and Friedrich Nietzsche. British philosopher Bertrand Russell, Austrian psychoanalyst Sigmund Freud, and Sartre are among the 20th century’s most influential atheists.

Nineteenth-century German philosopher Friedrich Nietzsche was an influential critic of religious systems, especially Christianity, which he believed chained its adherents to a herd morality. By declaring that “God is dead,” Nietzsche signified that traditional religious belief in God no longer played a central role in human experience. Nietzsche believed we would have to find secular justifications for morality to avoid nihilism - the absence of all belief.

Atheists justify their philosophical position in several different ways. Negative atheists attempt to establish their position by refuting typical theist arguments for the existence of God, such as the argument from first cause, the argument from design, the ontological argument, and the argument from religious experience. Other negative atheists assert that any statement about God is meaningless, because attributes such as all-knowing and all-powerful cannot be comprehended by the human mind. Positive atheists, on the other hand, defend their position by arguing that the concept of God is inconsistent. They question, for example, whether a God who is all-knowing can also be all-good and how a God who lacks bodily existence can be all-knowing.

Some positive atheists have maintained that the existence of evil makes the existence of God improbable. In particular, atheists assert that theism does not provide an adequate explanation for the existence of seemingly gratuitous evil, such as the suffering of innocent children. Theists commonly defend the existence of evil by claiming that God desires that human beings have the freedom to choose between good and evil, or that the purpose of evil is to build human character, such as the ability to persevere. Positive atheists counter that justifications for evil in terms of human free will leave unexplained why, for example, children suffer because of genetic diseases or abuse from adults. Arguments that God allows pain and suffering to build human character fail, in turn, to explain why there was suffering among animals before human beings evolved and why human character could not be developed with less suffering than occurs in the world. For atheists, a better explanation for the presence of evil in the world is that God does not exist.

Atheists have also criticized historical evidence used to support belief in the major theistic religions. For example, atheists have argued that a lack of evidence casts doubt on important doctrines of Christianity, such as the virgin birth and the resurrection of Jesus Christ. Because such events are said to represent miracles, atheists assert that extremely strong evidence is necessary to support their occurrence. According to atheists, the available evidence to support these alleged miracles - from Biblical, pagan, and Jewish sources - is weak, and therefore such claims should be rejected.

Atheism is primarily a reaction to, or a rejection of, religious belief, and thus does not determine other philosophical beliefs. Atheism has sometimes been associated with the philosophical ideas of materialism, which holds that only matter exists; communism, which asserts that religion impedes human progress; and rationalism, which emphasizes analytic reasoning over other sources of knowledge. However, there is no necessary connection between atheism and these positions. Some atheists have opposed communism and some have rejected materialism. Although nearly all contemporary materialists are atheists, the ancient Greek materialist Epicurus believed the gods were made of matter in the form of atoms. Rationalists such as French philosopher René Descartes have believed in God, whereas atheists such as Sartre are not considered to be rationalists. Atheism has also been associated with systems of thought that reject authority, such as anarchism, a political theory opposed to all forms of government, and existentialism, a philosophic movement that emphasizes absolute human freedom of choice; there is, however, no necessary connection between atheism and these positions. British analytic philosopher A.J. Ayer was an atheist who opposed existentialism, while Danish philosopher Søren Kierkegaard was an existentialist who accepted God. Marx was an atheist who rejected anarchism, while Russian novelist Leo Tolstoy, a Christian, embraced anarchism. Because atheism in a strict sense is merely a negation, it does not provide a comprehensive world-view. It is therefore not possible to presume other philosophical positions to be outgrowths of atheism.

Intellectual debate over the existence of God continues to be active, especially on college campuses, in religious discussion groups, and in electronic forums on the Internet. In contemporary philosophical thought, atheism has been defended by British philosopher Antony Flew, Australian philosopher John Mackie, and American philosopher Michael Martin, among others. Leading organizations of unbelief in the United States include American Atheists and the Committee for the Scientific Study of Religion.

Friedrich Nietzsche (1844-1900), German philosopher, poet, and classical philologist, who was one of the most provocative and influential thinkers of the 19th century. Nietzsche founded his morality on what he saw as the most basic human drive, the will to power. Nietzsche criticized Christianity and other philosophers’ moral systems as “slave moralities” because, in his view, they chained all members of society with universal rules of ethics. Nietzsche offered, in contrast, a “master morality” that prized the creative influence of powerful individuals who transcended the common rules of society.

Nietzsche studied classical philology at the universities of Bonn and Leipzig and was appointed professor of classical philology at the University of Basel at the age of 24. Ill health (he was plagued throughout his life by poor eyesight and migraine headaches) forced his retirement in 1879. Ten years later he suffered a mental breakdown from which he never recovered. He died in Weimar in 1900.

In addition to the influence of Greek culture, particularly the philosophies of Plato and Aristotle, Nietzsche was influenced by German philosopher Arthur Schopenhauer, by the theory of evolution, and by his friendship with German composer Richard Wagner.

Nietzsche’s first major work, Die Geburt der Tragödie aus dem Geiste der Musik (The Birth of Tragedy), appeared in 1872. His most prolific period as an author was the 1880s. During that decade he wrote Also sprach Zarathustra (Parts 1-3, 1883-1884; Part 4, 1885; translated as Thus Spake Zarathustra), Jenseits von Gut und Böse (1886; Beyond Good and Evil), Zur Genealogie der Moral (1887; On the Genealogy of Morals), Der Antichrist (1888; The Antichrist), and Ecce Homo (completed 1888; published 1908). Nietzsche’s last major work, Der Wille zur Macht (The Will to Power), was published in 1901.

One of Nietzsche’s fundamental contentions was that traditional values (represented primarily by Christianity) had lost their power in the lives of individuals. He expressed this in his proclamation “God is dead.” He was convinced that traditional values represented a “slave morality,” a morality created by weak and resentful individuals who encouraged such behaviour as gentleness and kindness because the behaviour served their interests. Nietzsche claimed that new values could be created to replace the traditional ones, and his discussion of this possibility led to his concept of the overman or superman.

According to Nietzsche, the masses (whom he termed the herd or mob) conform to tradition, whereas his ideal overman is secure, independent, and highly individualistic. The overman feels deeply, but his passions are rationally controlled. Concentrating on the real world rather than on the rewards of the next world promised by religion, the overman affirms life, including the suffering and pain that accompany human existence. Nietzsche’s overman is a creator of values, a creator of a “master morality” that reflects the strength and independence of one who is liberated from all values, except those that he deems valid.

Nietzsche maintained that all human behaviour is motivated by the will to power. In its positive sense, the will to power is not simply power over others, but the power over one’s self that is necessary for creativity. Such power is manifested in the overman's independence, creativity, and originality. Although Nietzsche explicitly denied that any overmen had yet arisen, he mentions several individuals who could serve as models. Among these models he lists Jesus, Greek philosopher Socrates, Florentine thinker Leonardo da Vinci, Italian artist Michelangelo, English playwright William Shakespeare, German author Johann Wolfgang von Goethe, Roman ruler Julius Caesar, and French emperor Napoleon I.

The concept of the overman has often been interpreted as one that postulates a master-slave society and has been identified with totalitarian philosophies. Many scholars deny the connection and attribute it to misinterpretation of Nietzsche's work.

An acclaimed poet, Nietzsche exerted much influence on German literature, as well as on French literature and theology. His concepts have been discussed and elaborated upon by such individuals as the German philosophers Karl Jaspers and Martin Heidegger, the German Jewish philosopher Martin Buber, the German American theologian Paul Tillich, and the French writers Albert Camus and Jean-Paul Sartre. After World War II (1939-1945), the American theologians Thomas J.J. Altizer and Paul Van Buren seized upon Nietzsche’s proclamation “God is dead” in their attempt to make Christianity relevant to its believers in the 1960s and 1970s.

Nietzsche is openly pessimistic about the possibility of knowledge of the truth: we know (or believe, or imagine) just as much as may be useful in the interests of the human herd, the species; and even what is here called ‘utility’ is ultimately also a mere belief, something imaginary and perhaps precisely that most calamitous stupidity of which we shall perish some day.

This position is very radical. Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also refuses the pragmatist identification of knowledge and truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.

Nietzsche’s view, his ‘perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation and to which interpretations would correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: facts are precisely what there is not, only interpretations.

It is often claimed that perspectivism is self-undermining. If the thesis that all views are interpretations is true then, it is argued, there is at least one view that is not an interpretation (namely, the thesis itself). If, on the other hand, the thesis is itself an interpretation, then there is no reason to believe that it is true, and it follows again that not every view is an interpretation.

Nevertheless, this refutation assumes that if a view - perspectivism itself included - is an interpretation, it is thereby wrong. This is not the case. To call any view, including perspectivism, an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that perspectivism is actually false, it is necessary to produce another view superior to it on specific epistemological grounds.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes truth to specific approaches in relation to facts specified internally by the approaches themselves. Nonetheless, it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus Nietzsche grants the truth of specific scientific theories; he does, nevertheless, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’: neither the facts science addresses nor the methods it employs are privileged. Scientific theories serve the purposes for which they have been devised, but these have no priority over the many other purposes of human life.

The existence of many purposes and needs relative to which the value of theories is established - another crucial element of perspectivism - is sometimes thought to imply a lawless relativism, according to which no standards for evaluating purposes and theories can be devised. This is correct only in that Nietzsche denies the existence of a single set of standards for determining epistemic value once and for all. However, he holds that specific views can be compared with and evaluated in relation to one another. The ability to use criteria acceptable in particular circumstances does not presuppose the existence of criteria applicable in all. Agreement is therefore not always possible, since individuals may sometimes differ over the most fundamental issues dividing them.

The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, James, and most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant’s work.

A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: When I consider the mind, that is to say, myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.

Here is another, more elaborate argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James’ well-known version of the argument starts as follows: Take a sentence of a dozen words, take twelve men, and tell each man one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.

James generalizes this observation to all conscious states. To get dualism out of this, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. Nevertheless, this thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant’s treatment of it (as he found it in Descartes and Leibniz, not James, of course).

Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just that in the chapter of the Critique of Pure Reason (1781) on the Paralogisms of Pure Reason (paralogisms are faulty inferences about the nature of the mind). The Unity Argument is the target of a major part of that chapter; if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant’s argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant’s and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be a bit difficult for us to recapture this now, but the idea that no system of material components acting together could achieve unified consciousness had a strong intuitive appeal for a long time.

The notion of the unity of consciousness was in addition central to one of Kant’s own famous arguments, his ‘transcendental deduction of the categories’. In this argument, boiled down to its essentials, Kant claims that to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed that we could do, we must apply certain concepts to the items in question. In particular we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called ‘modal’ concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.

It was relational conceptual representation that most interested Kant, and of relational concepts he thought the concept of cause and effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause and effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on "the secure path of a science.” The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.

Although the unity of consciousness had been at the centre of pre-20th century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to the notion. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being either a myth or something that we cannot and do not need to study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of ‘phenomenology’ into a respectable theory.

The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.

To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists operate with access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness as distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.

It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a cognitive state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know many of my properties and much of what is happening to me, at both physical and mental levels. I also know things about my past, things I have done and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? Underlying all of these is my ability to think about myself; and much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects.

When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.

And if there is a straightforward explanation, in terms of the semantics of the first-person pronoun, of what makes such first-person contents immune to error through misidentification, then it seems fair to say that the problem of self-consciousness has been dissolved, at least as much as solved.

This proposed account would be on a par with other noted examples, such as the redundancy theory of truth. That is to say, the redundancy theory, or deflationary view of truth, claims that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence, ‘redundancy’), and (2) that in less direct contexts, such as ‘everything he said was true’, or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the latter can be rendered as ‘(∀p, q)((p & (p → q)) → q)’, where there is no use of a notion of truth.
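As a hedged illustration (these renderings are the standard ones in discussions of deflationism rather than quotations from the passage above), the two generalizing uses can be paraphrased without a truth predicate roughly as follows:

\[
\begin{aligned}
\text{`Everything he said was true'} &\quad\approx\quad \forall p\,(\text{he said that } p \rightarrow p) \\
\text{`All logical consequences of true propositions are true'} &\quad\approx\quad \forall p\,\forall q\,\bigl((p \land (p \rightarrow q)) \rightarrow q\bigr)
\end{aligned}
\]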

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’. Discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.

Confronted with the range of putatively self-conscious cognitive states, one might assume that there is a single ability that they all presuppose: my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are all ways of thinking about myself.

Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency. This involves concepts and descriptions that can apply equally to me and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain “I”-thoughts.

What is an “I”-thought? Obviously, an “I”-thought is a thought that involves self-reference: I can think an “I”-thought only by thinking about myself. Equally obviously, though, this cannot be all that there is to say on the subject. I can think thoughts that involve self-reference but are not “I”-thoughts. Suppose I think that the next person to get a parking ticket in the centre of Toronto deserves everything he gets. Unbeknownst to me, the very next recipient of a parking ticket will be me. This makes my thought self-referring, but it does not make it an “I”-thought. Why not? The answer is simply that I do not know that I will be the next person to get a parking ticket in the downtown area of Toronto. If ‘A’ is that unfortunate person, then there is a true identity statement of the form I = A, but since I do not know that this identity holds, I cannot be ascribed the thought that I will deserve everything I get. I am not thinking genuine “I”-thoughts, because one cannot think a genuine “I”-thought if one is ignorant that one is thinking about oneself. So it is natural to conclude that “I”-thoughts involve a distinctive type of self-reference. This is the sort of self-reference whose natural linguistic expression is the first-person pronoun “I,” because one cannot use the first-person pronoun without knowing that one is thinking about oneself.

This is still not quite right, however, because thought contents can be specified directly or indirectly. The claim should be that all the cognitive states under consideration presuppose the ability to think about oneself. This is not only something they all have in common; it is also what underlies them all. We can see in more detail what this suggestion amounts to. The claim is that what makes all those cognitive states modes of self-consciousness is the fact that they all have contents that can be specified directly by means of the first-person pronoun “I” or indirectly by means of the indirect reflexive pronoun “he,” such that they are first-person contents.

The class of first-person contents is not a homogeneous class. There is an important distinction to be drawn between two different types of first-person content, corresponding to two different modes in which the first person can be employed. The existence of this distinction was first noted by Wittgenstein in an important passage from The Blue Book: there are two different cases in the use of the word “I” (or “my”), which may be called “the use as object” and “the use as subject.” Examples of the first kind of use are these: “My arm is broken,” “I have grown six inches,” “I have a bump on my forehead,” “The wind blows my hair about.” Examples of the second kind are: “I see so-and-so,” “I try to lift my arm,” “I think it will rain,” “I have a toothache.”

The explanation given of the distinction hinges on whether or not the judgements involve identification. One can point to the difference between these two categories by saying: the cases of the first category involve the recognition of a particular person, and there is in these cases the possibility of an error, or rather: the possibility of an error has been provided for . . . It is possible that, say in an accident, I should feel a pain in my arm, see a broken arm at my side, and think it is mine when really it is my neighbour’s. And I could, looking into a mirror, mistake a bump on his forehead for one on mine. On the other hand, there is no question of recognizing a person when I say I have toothache. To ask “are you sure that it’s you who have pains?” would be nonsensical.

Wittgenstein is drawing a distinction between two types of first-person contents. The first type, which he describes as involving the use of “I” as object, can be analysed in terms of more basic propositions. Suppose that the thought “I am B” involves such a use of “I.” Then we can understand it as a conjunction of the following two thoughts: “a is B” and “I am a.” We can term the former the predication component and the latter the identification component. The reason for breaking the original thought down into these two components is precisely the possibility of error that Wittgenstein stresses in the second passage quoted. One can be quite correct in predicating that someone is ‘B’, even though mistaken in identifying oneself as that person.

First-person contents that are immune to error through misidentification can still be mistaken, but they do have a basic warrant in virtue of the evidence on which they are based, because the fact that they are derived from such an evidence base is closely linked to the fact that they are immune to error through misidentification. Of course, there is room for considerable debate about what types of evidence base are correlated with this class of first-person contents. It seems, then, that the distinction between different types of first-person content can be characterized in two different ways. We can distinguish between those first-person contents that are immune to error through misidentification and those that are subject to such error. Alternatively, we can discriminate between first-person contents with an identification component and those without such a component. For present purposes, these different formulations pick out the same classes of first-person contents, although in interestingly different ways.

Any first-person content subject to error through misidentification contains an identification component of the form “I am a,” and that identification component itself employs the first-person pronoun. We can then ask of that identification component: does it or does it not have an identification component of its own? On pain of an infinite regress, at some stage we will have to arrive at an employment of the first-person pronoun that does not presuppose an identification component. The conclusion, then, is that any first-person content subject to error through misidentification will ultimately be anchored in a first-person content that is immune to error through misidentification.

It is also important to stress how such an account of self-consciousness, and any theory of self-consciousness that accords a serious role to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language. The principle has been defended most vigorously by Michael Dummett.

Even so, thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. Dummett goes on to draw the clear methodological implications of this view of the nature of thought: We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.

Many philosophers would want to dissent from the strong claim that the philosophical analysis of thought through the philosophical analysis of language is the fundamental task of philosophy. But there is a weaker principle, the Thought-Language Principle, that is very widely held.

As it stands, there is a distinction between two different roles that the pronoun “he” can play in such oratio obliqua clauses. On the one hand, “he” can be employed in a proposition that the antecedent of the pronoun (i.e., the person named just before the clause in question) would have expressed using the first-person pronoun. In such a situation “he” is functioning as a quasi-indicator, and when it does so it is written as “he*.” Others have described this as the indirect reflexive pronoun. When “he” is functioning as an ordinary indicator, it picks out an individual in such a way that the person named just before the clause need not realize the identity of himself with that person. Clearly, the class of first-person contents is not a homogeneous class.

A subject has distinguishing self-awareness to the extent that he is able to distinguish himself from the environment and its contents. He has psychological self-awareness to the extent that he is able to distinguish himself as a psychological subject within a contrast space of other psychological subjects. What does this require? The notion of a non-conceptual point of view brings together the capacity to register one’s distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these two elements must be considered together emerges from a point made earlier: the richness of the self-awareness that accompanies the capacity to distinguish the self from the environment is directly proportional to the richness of the awareness of the environment from which the self is being distinguished. So no creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment, and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story. It leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it is composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.

The very idea of a perceivable, objective, spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is, is given by what he can perceive.

But the main thrust of his work is very much that the dependence holds equally in the opposite direction.

It seems that this general idea can be extrapolated and brought to bear on the notion of a non-conceptual point of view. What binds together the two apparently discrete components of a non-conceptual point of view is precisely the fact that a creature’s self-awareness must be awareness of itself as a spatial being that acts upon and is acted upon by the spatial world. Evans’s own gloss on how a subject’s self-awareness is awareness of himself as a spatial being involves the subject’s mastery of a simple theory explaining how the world makes his perceptions as they are, with principles like “I perceive such and such; such and such holds at P; so (probably) I am at P” and “I am at P; such and such does not hold at P; so I cannot really be perceiving such and such, even though it appears that I am” (Evans 1982). This is not very satisfactory, though. If the claim is that the subject must explicitly hold these principles, then it is clearly false. If, on the other hand, the claim is that these are the principles of a theory that a self-conscious subject must tacitly know, then the claim seems very uninformative in the absence of any specification of the precise forms of behaviour that can only be explained by the ascription of such a body of tacit knowledge. We need an account of what it is for a subject to be correctly described as possessing such a simple theory of perception. The point, however, is simply that the notion of a non-conceptual point of view as presented can be viewed as capturing, at a more primitive level, precisely the same phenomenon that Evans is trying to capture with his notion of a simple theory of perception.

Moreover, stressing the importance of action and movement indicates how the notion of a non-conceptual point of view might be grounded in the self-specifying information for action to be found in visual perception. I am thinking particularly of the concept of an affordance, so central to Gibsonian theories of perception. One important type of self-specifying information in the visual field is information about the possibilities for action and reaction that the environment affords the perceiver, and such affordances are non-conceptual first-person contents. The development of a non-conceptual point of view clearly involves certain forms of reasoning, and clearly we will not have a full understanding of the notion of a non-conceptual point of view until we have an explanation of how this reasoning can take place. The spatial reasoning involved in developing a non-conceptual point of view upon the world is largely a matter of calibrating different affordances into an integrated representation of the world.

In short, any learned cognitive ability is constructible out of more primitive abilities already in existence. There are good reasons to think that the perception of affordances is innate. And so if the perception of affordances is the key to the acquisition of an integrated spatial representation of the environment via the recognition of affordance symmetries, affordance transitivities, and affordance identities, then it is perfectly conceivable that the capacities implicated in an integrated representation of the world could emerge non-mysteriously from innate abilities.

Nonetheless, there are many philosophers who would be prepared to countenance the possibility of non-conceptual content without accepting the use of the theory of non-conceptual content to solve the paradox of self-consciousness. This is a more substantial task, and the methodology adopted rests on the first of the marks of content, namely that content-bearing states serve to explain behaviour in situations where the connections between sensory input and behavioural output cannot be plotted in a law-like manner (the functionalist theory of self-reference). It is not that every instance of intentional behaviour where there are no such law-like connections between sensory input and behavioural output needs to be explained by attributing to the creature in question representational states with first-person contents. Even so, many such instances of intentional behaviour do need to be explained in this way. This offers a way of establishing the legitimacy of non-conceptual first-person contents. What would satisfactorily demonstrate the legitimacy of non-conceptual first-person contents would be the existence of forms of behaviour in prelinguistic or nonlinguistic creatures for which inference to the best explanation (which in this context includes inference to the most parsimonious explanation) demands the ascription of states with non-conceptual first-person contents.

The non-conceptual first-person contents and the pick-up of self-specifying information in the structure of exteroceptive perception provide very primitive forms of non-conceptual self-consciousness, even if forms that can plausibly be viewed as in place from birth or shortly afterward. The dimension along which forms of self-consciousness must be compared is the richness of the conception of the self that they provide. A crucial element in any form of self-consciousness is how it enables the self-conscious subject to distinguish between self and environment - what many developmental psychologists term self-world dualism. In this sense, self-consciousness is essentially a contrastive notion. One implication of this is that a proper understanding of the richness of a given conception of the self requires taking into account the richness of the conception of the environment with which it is associated. In the case of both somatic proprioception and the pick-up of self-specifying information in exteroceptive perception, there is a relatively impoverished conception of the environment. One prominent limitation is that both are synchronic rather than diachronic. The distinction between self and environment that they offer is a distinction that is effective at a time but not over time. The contrast between propriospecific and exterospecific invariants in visual perception, for example, provides a way for a creature to distinguish between itself and the world at any given moment, but this is not the same as a conception of oneself as an enduring thing distinguishable over time from an environment that also endures over time.

The notion of a non-conceptual point of view brings together the capacity to register one’s distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these elements must be considered together emerges from the point made above: the richness of such self-awareness is proportional to the richness of the awareness of the environment from which the self is being distinguished. So no creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment, and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story. It leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it is composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.

The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world.

The very idea of a perceivable, objective, spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is is given by what he can perceive.

One possible reaction to the paradox of self-consciousness is that it arises only because unrealistic and ultimately unwarranted requirements are being placed on what is to count as genuinely self-referring first-person thoughts. Support for such an objection will be found in those theories that attempt to explain first-person thoughts in a way that does not presuppose any form of internal representation of the self or any form of self-knowledge. The paradox arises because mastery of the semantics of the first-person pronoun is available only to creatures capable of thinking first-person thoughts whose contents involve reflexive self-reference and thus seem to presuppose mastery of the first-person pronoun. If, though, it can be established that the capacity to think genuinely first-person thoughts does not depend on any linguistic and conceptual abilities, then arguably the problem of circularity will no longer have purchase.

There is an account of self-reference and genuinely first-person thought that can be read in a way that poses just such a direct challenge to the account of self-reference underpinning the paradox. This is the functionalist account. On the functionalist view, as noted before, reflexive self-reference is a completely unmysterious phenomenon susceptible to a functional analysis. Reflexive self-reference is not dependent upon any antecedent conceptual or linguistic skills. Nonetheless, the functionalist account of reflexive self-reference is deemed to be sufficiently rich to provide the foundation for an account of the semantics of the first-person pronoun. If this is right, then the circularity at the heart of the paradox can be avoided.

The circularity problems at the root of the paradox arise because mastery of the semantics of the first-person pronoun requires the capacity to think first-person thoughts whose natural expression is by means of the first-person pronoun. It seems clear that the circle will be broken if there are forms of first-person thought more primitive than these, forms that do not require linguistic mastery of the first-person pronoun. What creates the problem of capacity circularity is the thought that we need to appeal to first-person contents in explaining mastery of the first-person pronoun, in combination with the thought that any creature capable of entertaining first-person contents will have mastered the first-person pronoun. So if we want to retain the thought that mastery of the first-person pronoun can only be explained in terms of first-person contents, capacity circularity can only be avoided if there are first-person contents that do not presuppose mastery of the first-person pronoun.

On the other hand, it seems to follow from everything said earlier about "I"-thoughts that first-person thought in the absence of linguistic mastery of the first-person pronoun is a contradiction in terms. First-person thoughts have first-person contents, where first-person contents can only be specified in terms of either the first-person pronoun or the indirect reflexive pronoun. So how could such thoughts be entertained by a thinker incapable of reflexive self-reference? How can a thinker who has not mastered the first-person pronoun plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are genuine first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.

The best developed functionalist theory of self-reference has been provided by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a "subjective belief," that is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. The explanation of subjective belief that he offers makes such beliefs independent of both linguistic abilities and conscious beliefs. From this basic account he constructs an account of conscious subjective beliefs and of the reference of the first-person pronoun "I." These putatively more sophisticated cognitive states are causally derivable from basic subjective beliefs.

Historically, Heidegger's theory of spatiality distinguishes three different types of space: (1) world-space, (2) regions (Gegend), and (3) Dasein's spatiality. What Heidegger calls "world-space" is space conceived as an "arena" or "container" for objects. It captures both our ordinary conception of space and theoretical space - in particular absolute space. Chairs, desks, and buildings exist "in" space, but world-space is independent of such objects, much like absolute space "in which" things exist. However, Heidegger thinks that such a conception of space is an abstraction from the spatial conduct of our everyday activities. The things that we deal with are near or far relative to us; according to Heidegger, this nearness or farness of things is how we first become familiar with that which we (later) represent to ourselves as "space." This familiarity is what renders the understanding of space (in a "container" metaphor or in any other way) possible. It is because we act spatially, going to places and reaching for things to use, that we can even develop a conception of abstract space at all. What we normally think of as space - world-space - turns out not to be what space fundamentally is; world-space is, in Heidegger's terminology, space conceived as vorhanden. It is an objectified space founded on a more basic space-of-action.

Since Heidegger thinks that space-of-action is the condition for world-space, he must explain the former without appealing to the latter. Heidegger's task then is to describe the space-of-action without presupposing such world-space and the derived concept of a system of spatial coordinates. However, this is difficult because all our usual linguistic expressions for describing spatial relations presuppose world-space. For example, how can one talk about the "distance between you and me" without presupposing some sort of metric, i.e., without presupposing an objective access to the relation? Our spatial notions such as "distance," "location," etc. must now be re-described from a standpoint within the spatial relation of self (Dasein) to the things dealt with. This problem is what motivates Heidegger to invent his own terminology and makes his discussion of space awkward. In what follows I will try to use ordinary language whenever possible to explain his principal ideas.

The space-of-action has two aspects: regions (space as Zuhandenheit) and Dasein's spatiality (space as Existentiale). The sort of space we deal with in our daily activity is "functional" or zuhanden, and Heidegger's term for it is "region." The places we work and live - the office, the park, the kitchen, etc. - all have different regions that organize our activities and contextualize "equipment." My desk area as my work region has a computer, printer, telephone, books, etc., in their appropriate "places," according to the spatiality of the way in which I work. Regions differ from space viewed as a "container"; the latter notion lacks a "referential" organization with respect to our context of activities. Heidegger wants to claim that referential functionality is an inherent feature of space itself, and not just a "human" characteristic added to a container-like space.

In our activity, how do we specifically stand with respect to functional space? We are not "in" space as things are, but we do exist in some spatially salient manner. What Heidegger is trying to capture is the difference between the nominal expression "we exist in space" and the adverbial expression "we exist spatially." He wants to describe spatiality as a mode of our existence rather than conceiving space as an independent entity. Heidegger identifies two features of Dasein's spatiality - "de-severance" (Ent-fernung) and "directionality" (Ausrichtung).

De-severance describes the way we exist as a process of spatial self-determination by "making things available" to ourselves. In Heidegger's language, in making things available we "take in space" by "making the farness vanish" and by "bringing things close."

We are not simply contemplative beings, but we exist through concretely acting in the world - by reaching for things and going to places. When I walk from my desk area into the kitchen, I am not simply changing locations from point A to point B in an arena-like space, but I am "taking in space" as I move, continuously making the "farness" of the kitchen "vanish," as shifting spatial perspectives open up as I go along.

This process is also inherently "directional." Every de-severing is aimed toward something or in a certain direction that is determined by our concern and by specific regions. I must always face and move in a certain direction that is dictated by a specific region. If I want to get a glass of ice tea, instead of going out into the yard, I face toward the kitchen and move in that direction, following the region of the hallway and the kitchen. Regions determine where things belong, and our actions are coordinated in directional ways accordingly.

De-severance, directionality, and regionality are three ways of describing the spatiality of a unified Being-in-the-world. As aspects of Being-in-the-world, these spatial modes of being are equiprimordial. Regions "refer" to our activities, since they are established by our ways of being and our activities. Our activities, in turn, are defined in terms of regions. Only through the region can our de-severance and directionality be established. Our object of concern always appears in a certain context and place, in a certain direction. It is because things appear in a certain direction and in their places "there" that we have our "here." We orient ourselves and organize our activities, always within regions that must already be given to us.

Heidegger's analysis of space does not refer to temporal aspects of Being-in-the-world, even though they are presupposed. In the second half of Being and Time he explicitly turns to the analysis of time and temporality in a discussion that is significantly more complex than the earlier account of spatiality. Heidegger makes the following five distinctions between types of time and temporality: (1) The ordinary or "vulgar" conception of time; this is time conceived as Vorhandenheit. (2) World-time; this is time as Zuhandenheit. Dasein's temporality is divided into three types: (3) Dasein's inauthentic (uneigentlich) temporality, (4) Dasein's authentic (eigentlich) temporality, and (5) originary temporality or “temporality as such.” The analyses of the vorhanden and zuhanden modes of time are interesting, but it is Dasein's temporality that is relevant to our discussion, since it is this form of time that is said to be founding for space. Unfortunately, Heidegger is not clear about which temporality plays this founding role.

We can begin by excluding Dasein's inauthentic temporality. This mode of time refers to our unengaged, "average" way of regarding time. It is the "past we forget" and the "future we expect," all without decisiveness and resolute understanding. Heidegger seems to consider that this mode of temporality is the temporal dimension of de-severance and directionality, since de-severance and directionality deal only with everyday actions. As such, inauthentic temporality must itself be founded in an authentic basis of some sort. The two remaining candidates for the foundation are Dasein's authentic temporality and originary temporality.

Dasein's authentic temporality is the "resolute" mode of temporal existence. An authentic temporality is realized when Dasein becomes aware of its own finite existence. This temporality has to do with one's grasp of his or her own life as a whole from one's own unique perspective. Life gains meaning as one's own life-project, bounded by the sense of one's realization that he or she is not immortal. This mode of time appears to have a normative function within Heidegger's theory. In the second half of BT he often refers to inauthentic or "everyday" mode of time as lacking some primordial quality which authentic temporality possesses.

In contrast, originary temporality is the formal structure of Dasein's temporality itself. In addition to its spatial Being-in-the-world, Dasein also exists essentially as "projection." Projection is oriented toward the future, and this futural orientation regulates our concern by constantly realizing various possibilities. Temporality is characterized formally as this dynamic structure of "a future that makes present in the process of having been." Heidegger calls the three moments of temporality - the future, the present, and the past - the three ecstasies of temporality. This mode of time is not normative but rather formal or neutral; as Blattner argues, the temporal features that constitute Dasein's temporality describe both inauthentic and authentic temporality.

There are some passages that indicate that authentic temporality is the primary manifestation of temporality, because of its essential orientation toward the future. For instance, Heidegger states that "temporality first showed itself in anticipatory resoluteness." Elsewhere, he argues that "the ‘time’ which is accessible to Dasein's common sense is not primordial, but arises rather from authentic temporality." In these formulations, authentic temporality is said to found the other, inauthentic modes. According to Blattner, this is "by far the most common" interpretation of the status of authentic time.

However, one can argue, with Blattner and Haar, that there are far more passages where Heidegger considers originary temporality as distinct from authentic temporality, and as founding both for it and for Being-in-the-world as well. Here are some examples: Temporality has different possibilities and different ways of temporalizing itself. The basic possibilities of existence, the authenticity and inauthenticity of Dasein, are grounded ontologically on possible temporalizations of temporality. Time is primordial as the temporalizing of temporality, and as such it makes possible the Constitution of the structure of care.

Heidegger's conception seems to be that it is because we are fundamentally temporal - having the formal structure of ecstatic-horizontal unity - that we can project, authentically or inauthentically, our concernful dealings in the world and exist as Being-in-the-world. It is on this account that temporality is said to found spatiality.

Since Heidegger uses the term "temporality" rather than "authentic temporality" whenever the founding relation between space and time is discussed, I will begin the following analysis by assuming that it is originary temporality that founds Dasein's spatiality. On this assumption two interpretations of the argument are possible, but both are unsuccessful given his phenomenological framework.

The principal argument appears in the section entitled "The Temporality of the Spatiality that is Characteristic of Dasein." Heidegger begins the section with the following remark: Though the expression `temporality' does not signify what one understands by "time" when one talks about `space and time', nevertheless spatiality seems to make up another basic attribute of Dasein corresponding to temporality. Thus with Dasein's spatiality, existential-temporal analysis seems to come to a limit, so that this entity that we call "Dasein" must be considered as `temporal' `and' as spatial coordinately.

Accordingly, Heidegger asks, "Has our existential-temporal analysis of Dasein thus been brought to a halt . . . by the spatiality that is characteristic of Dasein . . . and Being-in-the-world?" His answer is no. He argues that since "Dasein's constitution and its ways to be are possible ontologically only on the basis of temporality," and since the "spatiality that is characteristic of Dasein . . . belongs to Being-in-the-world," it follows that "Dasein's specific spatiality must be grounded in temporality."

Heidegger's claim is that the totality of regions-de-severance-directionality can be organized and re-organized "because Dasein as temporality is ecstatic-horizontal in its Being." Because Dasein exists futurally as "for-the-sake-of-which," it can discover regions. Thus, Heidegger remarks: "Only on the basis of its ecstatic-horizontal temporality is it possible for Dasein to break into space."

However, in order to establish that temporality founds spatiality, Heidegger would have to show that spatiality and temporality must be distinguished in such a way that temporality not only shares a content with spatiality but also has additional content as well. In other words, they must be truly distinct and not just analytically distinguishable. But what is the content of "the ecstatic-horizontal constitution of temporality"? Does it have a content above and beyond Being-in-the-world? Nicholson poses the same question as follows: Is it human care that accounts for the characteristic features of human temporality? Or is it, as Heidegger says, human temporality that accounts for the characteristic features of human care and serves as their foundation? The first alternative, according to Nicholson, is to reduce temporality to care: "the specific attributes of the temporality of Dasein . . . would be in their roots not aspects of temporality but reflections of Dasein's care." The second alternative is to treat temporality as having some content above and beyond care: "the three-fold constitution of care stems from the three-fold constitution of temporality."

Nicholson argues that the second alternative is the correct reading. Dasein lives in the world by making choices, but "the ecstasies of temporality lies well prior to any choice . . . so our study of care introduces us to a matter whose scope outreaches care: the ecstasies of temporality itself." Accordingly, "What was able to make clear is that the reign of temporal ecstasies over the choices we make accords with the place we occupy as finite beings in the world."

But if Nicholson's interpretation is right, what would be the content of "the ecstasies of the temporality itself," if not some sort of purely formal entity or condition such as Kant's "pure intuition?" But this would imply that Heidegger has left phenomenology behind and is now engaging in establishing a transcendental framework outside the analysis of Being-in-the-world, such that this formal structure founds Being-in-the-world. This is inconsistent with his initial claim that Being-in-the-world is itself foundational.

Nicholson's first alternative offers a more consistent reading. The structure of temporality should be treated as an abstraction from Dasein's Being-in-the-world, specifically from care. In this case, the content of temporality is just the past and the present and the future ways of Being-in-the-world. Heidegger's own words support this reading: "as Dasein temporalizes itself, a world is too," and "the world is neither present-at-hand nor ready-to-hand, but temporalizes itself in temporality." He also states that the zuhanden "world-time, in the rigorous sense of the existential-temporal conception of the world, belongs to temporality itself." In this reading, "temporality temporalizing itself," "Dasein's projection," and "the temporal projection of the world" are three different ways of describing the same "happening" of Being-in-the-world, which Heidegger calls "self-directive."

However, if this is the case, then temporality does not found spatiality, except perhaps in the trivial sense that spatiality is built into the notion of care that is identified with temporality. The fulfilling content of "temporality temporalizing itself" simply is the various openings of regions, i.e., Dasein's "breaking into space." Certainly, as Stroeker points out, it is true that "nearness and remoteness are spatio-temporal phenomena and cannot be conceived without a temporal moment." But this necessity does not constitute a foundation. Rather, they are equiprimordial. The addition of temporal dimensions does indeed complete the discussion of spatiality, which had abstracted from time. But this completion, while it better articulates the whole of Being-in-the-world, does not show that temporality is more fundamental.

If temporality and spatiality are equiprimordial, then all of the supposedly founding relations between temporality and spatiality could just as well be reversed and still hold true. Heidegger's view is that "because Dasein as temporality is ecstatic-horizontal in its Being, it can take along with it a space for which it has made room, and it can do so factically and constantly." But if Dasein is essentially a factical projection, then the reverse should also be true. Heidegger appears to have assumed the priority of temporality over spatiality, perhaps under the influence of Kant, Husserl, or Dilthey, and then based his analyses on that assumption.

However, there may still be a way to save Heidegger's foundational project in terms of authentic temporality. Heidegger never specifically says that authentic temporality plays this founding role, but since he suggests earlier that the primary manifestation of temporality is authentic temporality, such a reading may perhaps be justified. On this reading, authentic temporality would found the whole spatio-temporal structure of Being-in-the-world. The resoluteness of authentic temporality, arising out of Dasein's own "Being-towards-death," would supply a content to temporality above and beyond everyday involvements.

On this reading, Dasein's Situation has its foundations in resoluteness: Dasein determines its own Situation through anticipatory resoluteness, which includes particular locations and involvements, i.e., the spatiality of Being-in-the-world. The same set of circumstances could be transformed into a new situation with different significance, if Dasein chooses resolutely to bring that about. Authentic temporality in this case can be said to found spatiality, since Dasein's spatiality is determined by resoluteness. This reading moreover enables Heidegger to construct a hierarchical relation between temporality and spatiality within Being-in-the-world rather than going outside of it to a formal transcendental principle, since the choice of spatiality is grasped phenomenologically in terms of the concrete experience of decision.

Moreover, one might argue that according to Heidegger one's own grasp of "death" is uniquely a temporal mode of existence, whereas there is no such weighty conception involving spatiality. Death is what compels Dasein to "stand before itself in its ownmost potentiality-for-Being." Authentic Being-towards-death is a "Being towards a possibility - indeed, towards a distinctive possibility of Dasein itself." One could argue that notions such as "potentiality" and "possibility" are distinctively temporal, nonspatial notions. So "Being-towards-death," as temporal, appears to be much more ontologically "fundamental" than spatiality.

However, Heidegger is not yet out of the woods. I believe that labelling the notions of anticipatory resoluteness, Being-towards-death, potentiality, and possibility specifically as temporal modes of being (to the exclusion of spatiality) begs the question. Given Heidegger's phenomenological framework, why assume that these notions are only temporal (without spatial dimensions)? If Being-towards-death, potentiality-for-Being, and possibility were "purely" temporal notions, what phenomenological sense can we make of such abstract conceptions, given that these are manifestly our modes of existence as bodily beings? Heidegger cannot have in mind such an abstract notion of time, if he wants to treat authentic temporality as the meaning of care. It would seem more consistent with his theoretical framework to say that Being-towards-death is a rich spatio-temporal mode of being, given that Dasein is Being-in-the-world.

Furthermore, the interpretation that defines resoluteness as uniquely temporal suggests too much of a voluntaristic or subjectivistic notion of the self that controls its own Being-in-the-world from the standpoint of its future. This would drive a wedge between the self and its Being-in-the-world, thereby creating a temporal "inner self" which can decide its own spatiality. However, if Dasein is Being-in-the-world as Heidegger claims, then all of Dasein's decisions should be viewed as concretely grounded in Being-in-the-world. If so, spatiality must be an essential constitutive element.

Hence, authentic temporality, if construed narrowly as a mode of temporality, at first appears to be able to found spatiality, but it also commits Heidegger either to an account of time that is too abstract, or to a notion of the self far more like Sartre's than his own. What is lacking in Heidegger's theory, and what generates this sort of difficulty, is a developed conception of Dasein as a lived body - a notion more fully developed by Merleau-Ponty.

The elements of a more consistent interpretation of authentic temporality are present in Being and Time. This interpretation incorporates a view of "authentic spatiality" into the notion of authentic temporality. This would be Dasein's resolutely grasping its own spatio-temporal finitude with respect to its place and its world. Dasein is born at a particular place, lives in a particular place, and dies in a particular place, all of which it can relate to in an authentic way. The place Dasein lives is not a place of anonymous involvements. The place of Dasein must be there where its own potentiality-for-Being is realized. Dasein's place is thus a determination of its existence. Had Heidegger developed such a conception more fully, he would have seen that temporality is equiprimordial with thoroughly spatial and contextual Being-in-the-world. They are distinguishable but equally fundamental ways of emphasizing our finitude.

The internal tensions within his theory eventually led Heidegger to reconsider his own positions. In his later period, he explicitly develops what may be viewed as a conception of authentic spatiality. For instance, in "Building Dwelling Thinking," Heidegger states that Dasein's relations to locations and to spaces inhere in dwelling, and dwelling is the basic character of our Being. The notion of dwelling expresses an affirmation of spatial finitude. Through this affirmation one acquires a proper relation to one's environment.

But the idea of dwelling is in fact already present in Being and Time: regarding the term "Being-in-the-world," Heidegger explains that the word "in" is derived from "innan" - to "reside," "habitare," "to dwell." The emphasis on "dwelling" highlights the essentially "worldly" character of the self.

Thus from the beginning Heidegger had a conception of spatial finitude, but this fundamental insight was undeveloped because of his ambition to carry out the foundational project that favoured time. From the 1930s on, as Heidegger abandons the foundational project focussing on temporality, the conception of authentic spatiality comes to the fore. For example, in Discourse on Thinking Heidegger considers the spatial character of Being as "that-which-regions (die Gegnet)." This peculiar expression is a re-conceptualization of the notion of "region" as it appeared in Being and Time. Region is given an active character and defined as the "openness that surrounds us" which "comes to meet us." By giving it an active character, Heidegger wants to emphasize that region is not brought into being by us, but rather exists in its own right, as that which expresses our spatial existence. Heidegger states that "one needs to understand ‘resolve’ (Entschlossenheit) as it is understood in Being and Time: as the opening of man [Dasein] particularly undertaken by him for openness, . . . which we think of as that-which-regions." Here Heidegger is asserting an authentic conception of spatiality. The finitude expressed in the notion of Being-in-the-world is thus transformed into an authentic recognition of our finite worldly existence in later writings.

The return to the conception of spatial finitude in the later period shows that Heidegger never abandoned the original insight behind his conception of Being-in-the-world. But once committed to this idea, it is hard to justify singling out an aspect of the self - temporality - as the foundation for the rest of the structure. All of the Existentiale and zuhanden modes, which constitute the whole of Being-in-the-world, are equiprimordial, each mode articulating different aspects of a unified whole. The preference for temporality as the privileged meaning of existence reflects the Kantian residue in Heidegger's early doctrine that he later rejected as still excessively subjectivistic.

Meanwhile, it seems natural to combine this close connection with these conclusions by proposing an account of self-consciousness as the capacity to think "I"-thoughts that are immune to error through misidentification, where this immunity is explained by the semantics of "I" - in effect, a redundancy account of self-consciousness. Once we have an account of what it is to be capable of thinking "I"-thoughts, we will have explained everything distinctive about self-consciousness. This stems from the thought that what is distinctive about "I"-thoughts is that they are either themselves immune to error through misidentification or rest on further "I"-thoughts that are immune in that way.

Once we have an account of what it is to be capable of thinking thoughts that are immune to error through misidentification, we will have explained everything about the capacity to think "I"-thoughts. This claim derives from the thought that immunity to error through misidentification depends on the semantics of "I."

Once again, when we have an account of that semantics, we will have explained everything distinctive about the capacity to think thoughts that are immune to error through misidentification.

The suggestion is that the semantics of "I" will explain what is distinctive about the capacity to think thoughts immune to error through misidentification. Semantics alone cannot be expected to explain the capacity for thinking thoughts. The point is rather that all there is to the capacity to think thoughts that are immune to error is the capacity to think the sort of thought whose natural linguistic expression involves "I," where this capacity is given by mastery of the semantics of "I." To explain what it is to master the semantics of "I" is thereby to explain what it is to think thoughts immune to error through misidentification.

On this view, mastery of the semantics of "I" may be construed as the single most important element in a theory of self-consciousness.

An obvious question that might be put to a defender of the redundancy or deflationary theory is how mastery of the semantics of "I" can make sense of the distinction between "I"-contents that are immune to error through misidentification and "I"-contents that lack such immunity. However, this is only an apparent difficulty when one remembers that those "I"-contents that are not immune to error through misidentification - those employing "I" as object - can be broken down into component elements: an identification component and a predication component. It is the identification components of such judgements that mastery of the semantics of "I" must be called upon to explain, and identification components are, of course, immune to error through misidentification.

It is also important to stress how the redundancy or deflationary theory of self-consciousness, and indeed any theory of self-consciousness that accords a serious role to mastery of the semantics of "I," is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the analysis of thought can only proceed through the philosophical analysis of language: we communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language. It is these principles, which relate to what is open to view and not to what is hidden in the mind other than via the medium of language, which endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.

Moreover, at the core of the notion of self-consciousness, broadly construed, is the recognition of what developmental psychologists call "self-world dualism." Any subject properly described as self-conscious must be able to register the distinction between himself and the world; of course, this is a distinction that can be registered in a variety of ways. The capacity for self-ascription of thoughts and experiences, in combination with the capacity to understand the world as a spatial and causally structured system of mind-independent objects, is a high-level way of registering this distinction.

Consciousness of objects is closely related to sentience and to being awake. It is (at least) a matter of being in a distinctive informational and behaviourally responsive state with respect to one's immediate environmental surroundings. It is the ability, for example, to process and act responsively to information about food, friends, foes, and other items of relevance. One finds consciousness of objects in creatures much less complex than human beings. It is what we (at any rate first and primarily) have in mind when we say of some person or animal, as it is coming out of general anaesthesia, 'It is regaining consciousness.' Consciousness of objects is not just any form of informational access to the world, but knowing about, and being conscious of, things in the world.

We are conscious of our representations when we are conscious, not (just) of some object, but of our representations: 'I am seeing [as opposed to touching, smelling, tasting] and seeing clearly [as opposed to dimly].' Consciousness of our own representations is the ability to process and act responsively to information about oneself, but it is not just any form of such informational access. It is knowing about, being conscious of, one's own psychological states. In Nagel's famous phrase (1974), when we are conscious of our representations, it is 'like something' to have them. If, as seems likely, there are forms of consciousness that do not involve consciousness of objects, they might consist in consciousness of representations, though some theorists would insist that this kind of consciousness is not of representations either (via representations, perhaps, but not of them).

The distinction just drawn between consciousness of objects and consciousness of our representations of objects may seem similar to Block's (1995) well-known distinction between P- [phenomenal] and A- [access] consciousness. Here is his definition of ‘A-consciousness’: "A state is A-conscious if it is poised for direct control of thought and action." He tells us that he cannot define ‘P-consciousness’ in any "remotely non-circular way" but will use it to refer to what he calls "experiential properties," what it is like to have certain states. Our consciousness of objects may appear to be like A-consciousness. It is not, however; it is a form of P-consciousness. Consciousness of an object is - how else can we put it? - consciousness of the object. Even if consciousness is just informational access of a certain kind (something that Block would deny), not all informational access is conscious access, and it is conscious access we are talking about here. Recall the idea that it is like something to have a conscious state. Other closely related ideas are that in a conscious state, something appears to one, and that conscious states have a ‘felt quality’. A term for all this is phenomenology: conscious states have a phenomenology. (Thus some philosophers speak of phenomenal consciousness here.) We could now state the point we are trying to make this way: if I am conscious of an object, then it is like something to have that object as the content of a representation.

Historically, Heidegger' theory of spatiality distinguishes three different types of space: (1) world-space, (2) regions (Gegend), and (3) Dasein's spatiality. What Heidegger calls "world-space" is space conceived as an “arena” or “container” for objects. It captures both our ordinary conception of space and theoretical space - in particular absolute space. Chairs, desks, and buildings exist “in” space, but world-space is independent of such objects, much like absolute space “in which” things exist. However, Heidegger thinks that such a conception of space is an abstraction from the spatializing conduct of our everyday activities. The things that we deal with are near or far relative to us; according to Heidegger, this nearness or farness of things is how we first become familiar with that which we (later) represent to ourselves as "space." This familiarity is what renders the understanding of space (in a "container" metaphor or in any other way) possible. It is because we act spatially, going to places and reaching for things to use, that we can even develop a conception of abstract space at all. What we normally think of as space - world-space - turns out not to be what space fundamentally is; world-space is, in Heidegger's terminology, space conceived as vorhanden. It is an objectified space founded on a more basic space-of-action.

Since Heidegger thinks that space-of-action is the condition for world-space, he must explain the former without appealing to the latter. Heidegger's task then is to describe the space-of-action without presupposing such world-space and the derived concept of a system of spatial coordinates. However, this is difficult because all our usual linguistic expressions for describing spatial relations presuppose world-space. For example, how can one talk about the "distance between you and me" without presupposing some sort of metric, i.e., without presupposing an objective access to the relation? Our spatial notions such as "distance," "location," etc. must now be re-described from a standpoint within the spatial relation of self (Dasein) to the things dealt with. This problem is what motivates Heidegger to invent his own terminology and makes his discussion of space awkward. In what follows I will try to use ordinary language whenever possible to explain his principal ideas.

The space-of-action has two aspects: regions (space as Zuhandenheit) and Dasein's spatiality (space as Existentiale). The sort of space we deal with in our daily activity is "functional" or zuhanden, and Heidegger's term for it is "region." The places we work and live-the office, the park, the kitchen, etc.-all have different regions that organize our activities and contexualize “equipment.” My desk area as my work region has a computer, printer, telephone, books, etc., in their appropriate “places,” according to the spatiality of the way in which I work. Regions differ from space viewed as a "container"; the latter notion lacks a "referential" organization with respect to our context of activities. Heidegger wants to claim that referential functionality is an inherent feature of space itself, and not just a "human" characteristic added to a container-like space.

In our activity, how do we specifically stand with respect to functional space? We are not "in" space as things are, but we do exist in some spatially salient manner. What Heidegger is trying to capture is the difference between the nominal expression "we exist in space" and the adverbial expression "we exist spatially." He wants to describe spatiality as a mode of our existence rather than conceiving space as an independent entity. Heidegger identifies two features of Dasein's spatiality - "de-severance" (Ent-fernung) and "directionality" (Ausrichtung).

De-severance describes the way we exist as a process of spatial self-determination by “making things available” to ourselves. In Heidegger's language, in making things available we "take in space" by "making the farness vanish" and by "bringing things close"

We are not simply contemplative beings, but we exist through concretely acting in the world - by reaching for things and going to places. When I walk from my desk area into the kitchen, I am not simply changing locations from point A to B in an arena-like space, but I am “taking in space” as I move, continuously making the “farness” of the kitchen “vanish,” as the shifting spatial perspectives are opened as I go along.

This process is also inherently "directional." Every de-severing is aimed toward something or in a certain direction that is determined by our concern and by specific regions. I must always face and move in a certain direction that is dictated by a specific region. If I want to get a glass of ice tea, instead of going out into the yard, I face toward the kitchen and move in that direction, following the region of the hallway and the kitchen. Regions determine where things belong, and our actions are coordinated in directional ways accordingly.

De-severance, directionality, and regionality are three ways of describing the spatiality of a unified Being-in-the-world. As aspects of Being-in-the-world, these spatial modes of being are equiprimordial.9 10 Regions "refer" to our activities, since they are established by our ways of being and our activities. Our activities, in turn, are defined in terms of regions. Only through the region can our de-severance and directionality be established. Our object of concern always appears in a certain context and place, in a certain direction. It is because things appear in a certain direction and in their places “there” that we have our “here.” We orient ourselves and organize our activities, always within regions that must already be given to us.

Heidegger's analysis of space does not refer to temporal aspects of Being-in-the-world, even though they are presupposed. In the second half of Being and Time he explicitly turns to the analysis of time and temporality in a discussion that is significantly more complex than the earlier account of spatiality. Heidegger makes the following five distinctions between types of time and temporality: (1) the ordinary or "vulgar" conception of time; this is time conceived as Vorhandenheit. (2) world-time; this is time as Zuhandenheit. Dasein's temporality is divided into three types: (3) Dasein's inauthentic (uneigentlich) temporality, (4) Dasein's authentic (eigentlich) temporality, and (5) originary temporality or “temporality as such.” The analyses of the vorhanden and zuhanden modes of time are interesting, but it is Dasein's temporality that is relevant to our discussion, since it is this form of time that is said to be founding for space. Unfortunately, Heidegger is not clear about which temporality plays this founding role.

We can begin by excluding Dasein's inauthentic temporality. This mode of time refers to our unengaged, "average" way in which we regard time. It is the “past we forget” and the “future we expect,” all without decisiveness and resolute understanding. Heidegger seems to consider that this mode of temporality is the temporal dimension of de-severance and directionality, since de-severance and directionality deal only with everyday actions. As such, inauthentic temporality must itself be founded in an authentic basis of some sort. The two remaining candidates for the foundation are Dasein's authentic temporality and originary temporality.

Dasein's authentic temporality is the "resolute" mode of temporal existence. An authentic temporality is realized when Dasein becomes aware of its own finite existence. This temporality has to do with one's grasp of his or her own life as a whole from one's own unique perspective. Life gains meaning as one's own life-project, bounded by the sense of one's realization that he or she is not immortal. This mode of time appears to have a normative function within Heidegger's theory. In the second half of BT he often refers to inauthentic or "everyday" mode of time as lacking some primordial quality which authentic temporality possesses.

In contrast, to the originary temporality, for which the formal structure of Dasein's temporality itself is grounded to its spatial Being-in-the-world, Dasein also exists essentially as "projection." Projection is oriented toward the future, and this outcome orientation regulates our concern by constantly realizing various possibilities. Temporality is characterized formally as this dynamic structure of "a future that makes present in the process of having been." Heidegger calls the three moments of temporality - the future, the present, and the past - the three ecstasies of temporality. This mode of time is not normative but rather formal or neutral; as Blattner argues, the temporal features that constitute Dasein's temporality describe both inauthentic and authentic temporality.

There are some passages that indicate that authentic temporality is the primary manifestation of the temporality, because of its essential orientation toward the future. For instance, Heidegger states that "temporality first showed itself in anticipatory resoluteness." Elsewhere, he argues that "the ‘time’ which is accessible to Dasein's common sense is not primordial, but arises rather from authentic temporality." In these formulations, the authentic temporality is said to found other inauthentic modes. According to Blattner, this is "by far the most common" interpretation of the status of authentic time.

However, to ague with Blattner and Haar, in that there are far more passages where Heidegger considers an originary temporality as distinct from authentic temporality, and founding for it and for Being-in-the-world as well. Here are some examples: A temporality has different possibilities and different ways of temporalizing itself. The basic possibilities of existence, the authenticity and inauthenticity of Dasein, are grounded ontologically on possible temporalizations of the temporality. Time is primordial as the temporalizing of temporality, and as such it makes possible the Constitution of the structure of care.

Heidegger's conception seems to be that it is because we are fundamentally temporal - having the formal structure of ecstatic-horizontal unity - that we can project, authentically or in authentically, our concernful dealings in the world and exist as Being-in-the-world. It is on this account that temporality is said to found spatiality. Nicholson's first alternative offers a more consistent reading. The structure of temporality should be treated as an abstraction from Dasein's Being-in-the-world, specifically from care. In this case, the content of temporality is just the past and the present and the future ways of Being-in-the-world. Heidegger's own words support this reading: "as Dasein temporalizes itself, a world is too," and "the world is neither present-at-hand nor ready-to-hand, but temporalizes itself in temporality." He also states that the zuhanden "world-time, in the rigorous sense of the existential-temporal conception of the world, belongs to temporality itself." In this reading, "temporality temporalizing itself," "Dasein's projection," and "the temporal projection of the world" are three different ways of describing the same "happening" of Being-in-the-world, which Heidegger calls "self-directive."

However, if this is the case, then the temporality does not found spatiality, except perhaps in the trivial sense that spatiality is built into the notion of care that is identified with a temporality. The sustaining of "temporality temporalizing itself" simply is the various openings of regions, i.e., Dasein's "breaking into space." Certainly, as Stroeker points out, it is true that "nearness and remoteness are spatio-temporal phenomena and cannot be conceived without a temporal moment." But this necessity does not constitute a foundation. Rather, they are equiprimordial. The addition of temporal dimensions does indeed complete the discussion of spatiality, which abstracted from time. But this completion, while it better articulates the whole of Being-in-the-world, does not show that temporality is more fundamental.

If temporality and spatiality are equiprimordial, then all of the supposedly founding relations between temporality and spatiality could just as well be reversed and still hold true. Heidegger's view is that "because Dasein as temporality is ecstatic-horizontals in its Being, it can take along with it a space for which it has made room, and it can do so farcically and constantly." But if Dasein is essentially a factical projection, then the reverse should also be true. Heidegger appears to have assumed the priority of temporality over spatiality perhaps under the influence of Kant, Husserl, or Dilthey, and then based his analyses on that assumption.

However, there may still be a way to save Heidegger's foundational project in terms of authentic temporality. Heidegger never specifically mentions authentic temporality, since he suggests earlier that the primary manifestation of temporality is authentic temporality, such a reading may perhaps be justified. This reading would treat the whole spatio-temporal structure of Being-in-the-world. The resoluteness of authenticated temporality, arising out of Dasein's own "Being-towards-death," would supply a content to temporality above and beyond everyday involvements.

Heidegger is said to have its foundations in resoluteness, Dasein determines its own Situation through anticipatory resoluteness, which includes particular locations and involvements, i.e., the spatiality of Being-in-the-world. The same set of circumstances could be transformed into a new situation with different significance, if Dasein chooses resolutely to bring that about. An authentic temporality in this case can be said to found spatiality, since Dasein's spatiality is determined by resoluteness. This reading moreover enables Heidegger to construct a hierarchical relation between temporality and spatiality within Being-in-the-world rather than going outside of it to a formal transcendental principle, since the choice of spatiality is grasped phenomenologically in terms of the concrete experience of decision.

Moreover, one might argue that according to Heidegger one's own grasp of "death" is uniquely a temporal mode of existence, whereas there is no such weighty conception involving spatiality. Death is what compels Dasein to "stand before itself in its own most potentiality-for-Being." Authentic Being-towards-death is a "Being towards a possibility - indeed, towards a distinctive possibility of Dasein itself." One could argue that notions such as "potentiality" and "possibility" are distinctively temporal, nonspatial notions. So "Being-towards-death," as temporal, appears to be much more ontologically "fundamental" than spatiality.

However, Heidegger is not yet out of the woods. I believe that labelling the notions of anticipatory resoluteness, Being-towards-death, potentiality, and possibility specifically as temporal modes of being (to the exclusion of spatiality) begs the question. Given Heidegger's phenomenological framework, why assume that these notions are only temporal (without spatial dimensions)? If Being-towards-death, potentiality-for-Being, and possibility were "purely" temporal notions, what phenomenological sense can we make of such abstract conceptions, given that these are manifestly our modes of existence as bodily beings? Heidegger cannot have in mind such an abstract notion of time, if he wants to treat authentic temporality as the meaning of care. It would seem more consistent with his theoretical framework to say that Being-towards-death is a rich spatio-temporal mode of being, given that Dasein is Being-in-the-world.

Furthermore, the interpretation that defines resoluteness as uniquely temporal suggests too much of a voluntaristic or subjectivistic notion of the self that controls its own Being-in-the-world from the standpoint of its future. This would drive a wedge between the self and its Being-in-the-world, thereby creating a temporal "inner self" which can decide its own spatiality. However, if Dasein is Being-in-the-world as Heidegger claims, then all of Dasein's decisions should be viewed as concretely grounded in Being-in-the-world. If so, spatiality must be an essential constitutive element.

Hence, authentic temporality, if construed narrowly as the mode of temporality, at first appears to be able to found spatiality, but it also commits Heidegger either to an account of time that is too abstract, or to the notion of the self far more like Sartre's than his own. What is lacking in Heidegger's theory that generates this sort of difficulty is a developed conception of Dasein as a lived body - a notion more fully developed by Merleau-Ponty.

The elements of a more consistent interpretation of an authentic temporality are present in Being and Time. This interpretation incorporates a view of "authentic spatiality" in the notion of its authenticated temporality. This would be Dasein's resolutely grasping its own spatio-temporal finitude with respect to its place and its world. Dasein is born at a particular place, lives in a particular place, dies in a particular place, all of which it can by its relation to in an authenticated process. The place Dasein lives is not a place of anonymous involvements. The place of Dasein must be there where its own potentiality-for-Being is realized. Dasein's place is thus a determination of its existence. Had Heidegger developed such a conception more fully, he would have seen that temporality is equiprimordial with thoroughly spatial and contextual Being-in-the-world. They are distinguishable but equally fundamental ways of emphasizing our finitude.

The internalized tensions within his theory leads Heidegger to reconsider his own positions. In his later period, he explicitly develops what may be viewed as a conception of authentic spatiality. For instance, in "Building Dwelling Thinking," Heidegger states that Dasein's relations to locations and to spaces inheres in dwelling, and dwelling is the basic character of our Being. The notion of dwelling expresses an affirmation of spatial finitude. Through this affirmation one acquires a proper relation to one's environment.

But the idea of dwelling is in fact already discussed in Being and Time, regarding the term "Being-in-the-world," Heidegger explains that the word "in" is derived from "in-an" - to "reside," "habit are," "to dwell." The emphasis on "dwelling" highlights the essentially "worldly" character of the self.

Thus from the beginning Heidegger had a conception of spatial finitude, but this fundamental insight was undeveloped because of his ambition to carry out the foundational project that favoured time. From the 1930's on, as Heidegger abandons the foundational project focussing on temporality, the conception of authentic spatiality comes to the fore. For example, in Discourse on Thinking Heidegger considers the spatial character of Being as "that-which-regions (die Gegnet)." The peculiar expression is a re-conceptualization of the notion of "region" as it appeared in Being and Time. Region is given an active character and defined as the "openness that surrounds us" which "comes to meet us." By giving it an active character, Heidegger wants to emphasize that region is not brought into being by us, but rather exists in its own right, as that which expresses our spatial existence. Heidegger states that "one needs to understand ‘resolve’ (Entschlossenheit) as it is understood in Being and Time: as the opening of man [Dasein] particularly undertaken by him for openness, . . . which we think of as that-which-regions." Here Heidegger is asserting an authentic conception of spatiality. The finitude expressed in the notion of Being-in-the-world is thus transformed into an authentic recognition of our finite worldly existence in later writings.

The return to the conception of spatial finitude in the later period shows that Heidegger never abandoned the original insight behind his conception of Being-in-the-world. But once committed to this idea, it is hard to justify singling out one aspect of the self - temporality - as the foundation for the rest of the structure. All of the Existentiale and zuhanden modes, which constitute the whole of Being-in-the-world, are equiprimordial, each mode articulating different aspects of a unified whole. The preference for temporality as the privileged meaning of existence reflects the Kantian residue in Heidegger's early doctrine that he later rejected as still excessively subjectivistic.

Meanwhile, it seems natural to combine this close connection with an account of self-consciousness as the capacity to think "I"-thoughts that are immune to error through misidentification, where that immunity is tied to the semantics of the "self." This would be a redundancy account of self-consciousness: once we have an account of what it is to be capable of thinking "I"-thoughts, we will have explained everything distinctive about self-consciousness. It stems from the thought that what is distinctive about "I"-thoughts is that they are either themselves immune to error through misidentification or rest on further "I"-thoughts that are immune in that way.

Once we have an account of what it is to be capable of thinking thoughts that are immune to error through misidentification, we will have explained everything about the capacity to think "I"-thoughts. This claim derives from the thought that immunity to error through misidentification depends on the semantics of the "self."

Once again, when we have an account of that semantics, we will have explained everything distinctive about the capacity to think thoughts that are immune to error through misidentification.

The suggestion is that the semantics of the "self" will explain what is distinctive about the capacity to think thoughts immune to error through misidentification. Semantics alone cannot be expected to explain the capacity for thinking thoughts. The point, rather, is that all there is to the capacity to think thoughts that are immune to error is the capacity to think the sort of thought whose natural linguistic expression involves the "self," where this capacity is given by mastery of the semantics of the "self." What remains, then, is to explain what it is to master that semantics, and in particular how such mastery bears on thinking thoughts immune to error through misidentification.

On this view, mastery of the semantics of the "self" may be construed as the single most important explanatory element in a theory of self-consciousness.

An obvious question to put to a defender of the redundancy or deflationary theory is how mastery of the semantics of the "self" can make sense of the distinction between "self" contents that are immune to error through misidentification and those that lack such immunity. This is only an apparent difficulty, however, once one remembers that the "self" contents which lack such immunity - those employing "I" as object - can be broken down into their component elements: an identification component and a predication component. It is only the identification components of such judgements that mastery of the semantics of the "self" must be called upon to explain, and identification components are, of course, immune to error through misidentification.

It is also important to stress how the redundancy and deflationary theories of self-consciousness, and any theory of self-consciousness that accords a serious role to mastery of the semantics of the "self," are motivated by an important principle that has governed much of the development of analytical philosophy. The principle is that the analysis of thought can only proceed through the philosophical analysis of language: we communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing its use. It is these principles, which relate to what is open to view in the employment of language rather than to anything hidden in the mind, that endow our sentences with the senses they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.

Still, at the core of the notion of broad self-consciousness is the recognition of what developmental psychologists call "self-world dualism." Any subject properly described as self-conscious must be able to register the distinction between himself and the world; of course, this distinction can be registered in a variety of ways. The capacity for self-ascription of thoughts and experiences, in combination with the capacity to understand the world as a spatially and causally structured system of mind-independent objects, is a high-level way of registering this distinction.

Consciousness of objects is closely related to sentience and to being awake. It is (at least) a matter of being in a distinctive informational and behaviourally responsive state with respect to one's immediate environment. It is the ability, for example, to process and act responsively to information about food, friends, foes, and other items of relevance. One finds consciousness of objects in creatures much less complex than human beings. It is what we (at any rate first and primarily) have in mind when we say of some person or animal coming out of general anaesthesia, 'It is regaining consciousness.' Yet consciousness of objects is not just any form of informational access to the world; it is knowing about, and being conscious of, things in the world.

We are conscious of our representations when we are conscious, not (just) of some object, but of our representations: 'I am seeing [as opposed to touching, smelling, tasting] and seeing clearly [as opposed to dimly].' Consciousness of our own representations is the ability to process and act responsively to information about oneself, but it is not just any form of such informational access. It is knowing about, being conscious of, one's own psychological states. In Nagel's famous phrase (1974), when we are conscious of our representations, it is 'like something' to have them. If, as seems likely, there are forms of consciousness that do not involve consciousness of objects, they might consist in consciousness of representations, though some theorists would insist that this kind of consciousness is not of representations either (via representations, perhaps, but not of them).

The distinction just drawn between consciousness of objects and consciousness of our representations of objects may seem similar to Block's (1995) well-known distinction between P- [phenomenal] and A- [access] consciousness. Here is his definition of 'A-consciousness': "A state is A-conscious if it is poised for direct control of thought and action." He tells us that he cannot define 'P-consciousness' in any "remotely non-circular way" but will use it to refer to what he calls "experiential properties," what it is like to have certain states. Our consciousness of objects may appear to be like A-consciousness. It is not, however; it is a form of P-consciousness. Consciousness of an object is - how else can we put it? - consciousness of the object. Even if consciousness is just informational access of a certain kind (something that Block would deny), it is not just any form of informational access; we are talking about conscious access here. Recall the idea that it is like something to have a conscious state. Other closely related ideas are that in a conscious state something appears to one, and that conscious states have a 'felt quality'. A term for all this is phenomenology: conscious states have a phenomenology. (Thus some philosophers speak of phenomenal consciousness here.) We can now state the point we are trying to make this way: if I am conscious of an object, then it is like something to have that object as the content of a representation.

Some theorists would insist that this last statement be qualified. While such a representation of an object may provide everything that a representation has to have for its contents to be like something to me, they would urge, something more is needed. Different theorists would add different elements. For some, I would have to be aware, not just of the object, but of my representation of it. For others, I would have to be attending to the object in a certain way, or something of the sort. We cannot go into this controversy here; we are merely making the point that consciousness of objects is more than Block's A-consciousness.

Consciousness of self involves, not just consciousness of states that it is like something to have, but consciousness of the thing that has them, i.e., of oneself. It is the ability to process and act responsively to information about oneself, but again it is more than that. It is knowing about, being conscious of, oneself, indeed of oneself as oneself. Consciousness of oneself in this way is often called consciousness of self as the subject of experience. Consciousness of oneself as oneself seems to require indexical ability, and a special indexical ability at that: not just an ability to pick something out, but an ability to pick something out as oneself. Human beings have such self-referential indexical ability. Whether any other creatures have it is controversial. The leading nonhuman candidates would be chimpanzees and other primates that have been taught enough language to use first-person pronouns.

The literature on consciousness sometimes fails to distinguish consciousness of objects, consciousness of one's own representations, and consciousness of self, or treats one of the three, usually consciousness of one's own representations, as exhausting the whole of consciousness. (Some conscious states have no object, yet are not consciousness of a representation either. We cannot pursue that complication here.) The term 'conscious' and its cognates are ambiguous in everyday English. We speak of someone regaining consciousness - where we mean simple consciousness of the world. Yet we also say things like, 'She was only dimly conscious of what motivated her to say that' - where we do not mean that she lacked either consciousness of the world or consciousness of self, but rather that she was not conscious of certain things about herself, specifically certain of her own representational states. To understand the unity of consciousness, making these distinctions is important. The reason is this: the unity of consciousness takes a different form in consciousness of self than it takes in either consciousness of one's own representations or consciousness of objects.

So what is unified consciousness? As we said, the predominant form of the unity of consciousness is being aware of several things at the same time. Intuitively, this is the notion of several representations being aspects of a single encompassing conscious state. A more informative idea can be gleaned from the way philosophers have written about unified consciousness. As emerges from what they have said, the central feature of unified consciousness is taken to be something like this: a group of representations related to one another such that to be conscious of any of them is to be conscious of others of them, and of the group of them as a single group.

In order for science to be rigorous, Husserl claimed, mind must 'intend' itself as subject and also all of its 'meanings'. The task of philosophy, then, is to substantiate that science is in fact rigorous by clearly distinguishing, naming, and taxonomizing phenomena. What William James termed the stream of consciousness was dubbed by Husserl the system of experience. Recognizing, as James did, that consciousness is continuous, Husserl eventually concluded that any single mental phenomenon is a moving horizon receding in all directions at once toward all other phenomena.

Interestingly enough, this created an epistemological dilemma that became pervasive in the history of postmodern philosophy. The dilemma is this: if mind 'intends' itself as subject, and the objects within this mind are moving in all directions toward all other objects, how can any two minds objectively agree that they are referring to the same object? The followers of Husserl concluded that this was not possible; therefore, the prospect that two minds can objectively or inter-subjectively know the same truth is annihilated.

It is ironic, to say the least, that Husserl's attempt to establish a rigorous basis for science in human consciousness served to reinforce Nietzsche's claim that truths are evolving fictions that exist only in the subjective reality of single individuals. It also massively reinforced the stark Cartesian division between mind and world by seeming to legitimate the view that logic and mathematical systems reside only in human subjectivity and, therefore, that there is no real or necessary correspondence of physical theories with physical reality. These views would later be embraced by Ludwig Wittgenstein and Jean-Paul Sartre.

One of Nietzsche’s fundamental contentions was that traditional values (represented primarily by Christianity) had lost their power in the lives of individuals. He expressed this in his proclamation “God is dead.” He was convinced that traditional values represented a “slave morality”: a morality created by weak and resentful individuals, who encouraged such behaviour as gentleness and kindness because the behaviour served their interests.

By way of introducing Nietzsche’s writings, a few salient points should be made about what makes Nietzsche the “great critic” of that tradition, about why his critique is potentially so powerful, and about why it remains so provocative.

Although we can identify Nietzsche with a decisive challenge to the past, from one point of view there is nothing remarkably new about what Nietzsche is doing, though his style of doing it is intriguing and distinctive. His methods of analysis and criticism should feel quite familiar, since such forms of criticism have been directed at matters of representation before. The new possibility for our lives that he encourages is a program with strong and obvious roots in certain forms of Romanticism. Even in bearing the greater burden of tradition, he is thus deeply connected to the very considerations that create tradition.


The German philosopher Immanuel Kant tried to solve the crisis generated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside thought. He distinguished three kinds of knowledge: analytic a priori knowledge, which is exact and certain but uninformative, because it makes clear only what is contained in definitions; synthetic a posteriori knowledge, which conveys information about the world learned from experience, but is subject to the errors of the senses; and synthetic a priori knowledge, which is discovered by pure intuition and is both exact and certain, for it expresses the necessary conditions that the mind imposes on all objects of experience. Mathematics and philosophy, according to Kant, provide this last. Since the time of Kant, one of the most frequently argued questions in philosophy has been whether or not such a thing as synthetic a priori knowledge really exists.

Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to nearly all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.

Most philosophers since Plato have held that the highest ethical good is the same for everyone; insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, “I must find a truth that is true for me . . . the idea for which I can live or die.” Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.

All existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or to the structures of scientific understanding; indeed, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction.

Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do: each human being makes choices that create his or her own nature. In the existentialist formulation, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.

Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to commit to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger: anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.

Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries. However, elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many pre-modern philosophers and writers.

The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: the human self, which combines mind and body, is itself a paradox and contradiction.

Nineteenth-century Danish philosopher Søren Kierkegaard played a major role in the development of existentialist thought. Kierkegaard criticized the popular systematic method of rational philosophy advocated by the German philosopher Georg Wilhelm Friedrich Hegel. He emphasized the absurdity inherent in human life and questioned how any systematic philosophy could apply to the ambiguous human condition. In his deliberately unsystematic works, Kierkegaard argued that each individual should undertake an intense examination of his or her own existence.

Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual must therefore always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a “leap of faith” into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.

Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as the German philosopher G.W.F. Hegel. Instead, Kierkegaard focussed on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; translated 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham prove his faith by sacrificing his son.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values and lived according to his own morality. Nietzsche also advanced here his idea that “God is dead,” that traditional morality was no longer relevant in people’s lives. In the book, the sage Zarathustra comes down from the mountain where he has spent the last ten years alone in order to preach to the people.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.

The “will,” in philosophy and psychology, is the capacity to choose among alternative courses of action and to act on the choice made, particularly when the action is directed toward a specific goal or is governed by definite ideals and principles of conduct. Willed behaviour contrasts with behaviour stemming from instinct, impulse, reflex, or habit, none of which involves conscious choice among alternatives. It also contrasts with the vacillation manifested by alternating choices among conflicting alternatives.

Until the 20th century most philosophers conceived of the will as a separate faculty with which every person is born. They differed, however, about the role of this faculty in the makeup of the personality. For one school of philosophers, most notably represented by the German philosopher Arthur Schopenhauer, a universal will is the primary reality, and the individual's will forms part of it. In this view, the will dominates every other aspect of an individual's personality: knowledge, feelings, and direction in life. A contemporary form of Schopenhauer's theory is implicit in some forms of existentialism, such as the view expressed by the French philosopher Jean-Paul Sartre, which regards personality as expressed in action, and actions as manifestations of the will that gives meaning to the universe.

Most other philosophers have regarded the will as coequal with, or secondary to, other aspects of personality. Plato believed that the psyche is divided into three parts: reason, will, and desire. For rationalist philosophers, such as Aristotle, Thomas Aquinas, and René Descartes, the will is the agent of the rational soul in governing purely animal appetites and passions. Some empiricist philosophers, such as David Hume, discount the importance of rational influences upon the will; they think of the will as ruled mainly by emotion. Evolutionary philosophers, such as Herbert Spencer, and pragmatist philosophers, such as John Dewey, conceive of the will not as an innate faculty but as a product of experience, evolving gradually as the mind and personality of the individual develop in social interaction.

Modern psychologists tend to accept the pragmatic theory of the will. They regard the will as an aspect or quality of behaviour, rather than as a separate faculty. It is the whole person who wills. This act of willing is manifested by (1) the fixing of attention on distant goals and abstract standards and principles of conduct; (2) the weighing of alternative courses of action and the taking of deliberate action that seems best calculated to serve specific goals and principles; (3) the inhibition of impulses and habits that might distract attention from, or otherwise conflict with, a goal or principle; and (4) perseverance against deterrents and obstructions in the pursuit of goals or adherence to principles.

The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).

Heidegger, like Pascal and Kierkegaard, reacted against any attempt to put philosophy on a conclusive rationalistic basis - as did Max Scheler (1874-1928), the German social and religious philosopher whose work reflected the influence of the phenomenology of his countryman Edmund Husserl. Born in Munich, Scheler taught at the universities of Jena, Munich, and Cologne. In The Nature of Sympathy (1913; translated 1970), he applied Husserl's method of detailed phenomenological description to the social emotions that relate human beings to one another - especially love and hate. This book was followed by his most famous work, Formalism in Ethics and Non-Formal Ethics of Values (1913; translated 1973), a two-volume study of ethics in which he criticized the formal ethical approach of the German philosopher Immanuel Kant and substituted for it a study of specific values as they directly present themselves to consciousness. Scheler converted to Roman Catholicism in 1920 and wrote On the Eternal in Man (1921; translated 1960) to justify his conversion, following it with an important study of the sociology of knowledge, Die Wissensformen und die Gesellschaft (Forms of Knowledge and Society, 1926). Later he rejected Roman Catholicism and developed a philosophy, based on science, in which all abstract knowledge and religious values are considered sublimations of basic human drives. This is presented in his last book, The Place of Man in the Universe (1928; translated 1961).

Heidegger's thought was shaped by the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible and indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on Being and ontology, as well as on language.

The subjects treated in Aristotle's Metaphysics (substance, causality, the nature of being, and the existence of God) fixed the content of metaphysical speculation for centuries. Among the medieval Scholastic philosophers, metaphysics was known as the “transphysical science” on the assumption that, by means of it, the scholar could make the transition philosophically from the physical world to a world beyond sense perception. The 13th-century Scholastic philosopher and theologian St. Thomas Aquinas declared that the cognition of God, through a causal study of finite sensible beings, was the aim of metaphysics. With the rise of scientific study in the 16th century, the reconciliation of science and faith in God became an increasingly important problem.

The Irish-born philosopher and clergyman George Berkeley (1685-1753) argued that everything human beings can conceive of exists as an idea in a mind, a philosophical position known as idealism. Berkeley reasoned that because one cannot control one’s thoughts, they must come directly from a larger mind: that of God. In his treatise Concerning the Principles of Human Knowledge, written in 1710, Berkeley explained why he believed it is “impossible . . . that there should be any such thing as an outward object.”

Before the time of the German philosopher Immanuel Kant, metaphysics was characterized by a tendency to construct theories on the basis of deductive knowledge, that is, knowledge derived from reason alone, in contradistinction to empirical knowledge, which is gained by reference to the facts of experience. From such deductive knowledge were derived general propositions held to be true of all things. The method of inquiry based on deductive principles is known as rationalistic. This method may be subdivided into monism, which holds that the universe is made up of a single fundamental substance; dualism, the belief in two such substances; and pluralism, which proposes the existence of many fundamental substances.

In the 5th and 4th centuries BC, Plato postulated the existence of a realm of Ideas that the varied objects of common experience imperfectly reflect. He maintained that these ideal Forms are not only more clearly intelligible but also more real than the transient and essentially illusory objects themselves.

George Berkeley is considered the founder of idealism, the philosophical view that all physical objects are dependent on the mind for their existence. According to Berkeley's early 18th-century writing, an object such as a table exists only if a mind is perceiving it. Therefore, objects are ideas.

Berkeley speculated that all aspects of everything of which one is conscious are reducible to the ideas present in the mind. The observer does not conjure external objects into existence, however; the true ideas of them are caused in the human mind directly by God. Eighteenth-century German philosopher Immanuel Kant greatly refined idealism through his critical inquiry into what he believed to be the limits of possible knowledge. Kant held that all that can be known of things is the way in which they appear in experience; there is no way of knowing what they are substantially in themselves. He also held, however, that the fundamental principles of all science are essentially grounded in the constitution of the mind rather than being derived from the external world.


Trying to develop an all-encompassing philosophical system, German philosopher Georg Wilhelm Friedrich Hegel wrote on topics ranging from logic and history to art and literature. He considered art to be one of the supreme developments of spiritual and absolute knowledge, surpassed only by religion and philosophy. In his Introductory Lectures on Aesthetics, based on lectures delivered between 1820 and 1829, Hegel discussed the relationship of poetry to the other arts, particularly music, and explained that poetry was one mode of expressing the “Idea of beauty” that he believed resided in all art forms. For Hegel, poetry was “the universal realization of the art of the mind.”

Nineteenth-century German philosopher Georg Wilhelm Friedrich Hegel disagreed with Kant's theory concerning the inescapable human ignorance of what things are in themselves, arguing instead for the ultimate intelligibility of all existence. Hegel also maintained that the highest achievements of the human spirit (culture, science, religion, and the state) are not the result of naturally determined processes in the mind, but are conceived and sustained by the dialectical activity of free, reflective intellect.

Hegel applied the term dialectic to his philosophic system. Hegel believed that the evolution of ideas occurs through a dialectical process - that is, a concept gives rise to its opposite, and out of this conflict a third view, the synthesis, arises. The synthesis is at a higher level of truth than the first two views. Hegel's work is based on the idealistic concept of a universal mind that, through evolution, seeks to arrive at the highest level of self-awareness and freedom.

German political philosopher Karl Marx applied the concept of the dialectic to social and economic processes. Marx's so-called dialectical materialism is frequently considered a revision of the Hegelian dialectic of free, reflective intellect. Additional strains of idealistic thought can be found in the works of 19th-century Germans Johann Gottlieb Fichte and F.W.J. Schelling, 19th-century Englishman F.H. Bradley, 19th-century Americans Charles Sanders Peirce and Josiah Royce, and 20th-century Italian Benedetto Croce.

The monists, agreeing that only one basic substance exists, differ in their descriptions of its principal characteristic. Thus, in idealistic monism the substance is believed to be purely mental; in materialistic monism it is held to be purely physical, and in neutral monism it is considered neither exclusively mental nor solely physical. The idealistic position was held by the Irish philosopher George Berkeley, the materialistic by the English philosopher Thomas Hobbes, and the neutral by the Dutch philosopher Baruch Spinoza. The latter expounded a pantheistic view of reality in which the universe is identical with God and everything contains God's substance.

George Berkeley set out to challenge what he saw as the atheism and skepticism inherent in the prevailing philosophy of the early 18th century. His initial publications, which asserted that no objects or matter existed outside the human mind, were met with disdain by the London intelligentsia of the day. Berkeley aimed to explain his “Immaterialist” theory, part of the school of thought known as idealism, to a more general audience in Three Dialogues between Hylas and Philonous (1713).

The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.

In the work of the German philosopher Gottfried Wilhelm Leibniz, the universe is held to consist of many distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.

Other philosophers have held that knowledge of reality is not derived from deductive principles, but is obtained only from experience. This type of metaphysics is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or agnosticism with respect to the soul and the reality of God.

Immanuel Kant published his Critique of Pure Reason in 1781. Three years later he expanded on his study of the modes of thinking with an essay entitled “What is Enlightenment?” In this 1784 essay, Kant challenged readers to “dare to know,” arguing that it was not only a civic but also a moral duty to exercise the fundamental freedoms of thought and expression.

Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the deductive character of the structural principles of this empirical knowledge.

These principles are held to be necessary and universal in their application to experience, for in Kant's view the mind furnishes the archetypal forms and categories (space, time, causality, substance, and relation) to its sensations, and these categories are logically anterior to experience, although manifested only in experience. Their logical anteriority to experience makes these categories or structural principles transcendental; they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only as far as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience strictly by knowledge to the realm of ultimate reality makes up the critical feature of his philosophy, giving the key word to the titles of his three leading treatises, Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. In the system propounded in these works, Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

Some of Kant's most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, negated Kant's criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism opposing Kant's critical transcendentalism.

Since the formulation of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories are radical empiricism, or pragmatism, a native American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, the foremost exponents of which are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, as it is exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In the theory of voluntarism, the will is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analysed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analysed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined. The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.

In the 20th century the validity of metaphysical thinking has been disputed by the logical positivists and by the so-called dialectical materialism of the Marxists. The basic principle maintained by the logical positivists is the verifiability theory of meaning. According to this theory, a sentence has factual meaning only if it meets the test of observation. Logical positivists argue that metaphysical expressions such as “Nothing exists except material particles” and “Everything is part of one all-encompassing spirit” cannot be tested empirically. Therefore, according to the verifiability theory of meaning, these expressions have no factual cognitive meaning, although they can have an emotive meaning about human hopes and feelings.

The dialectical materialists assert that the mind is conditioned by and reflects material reality. Therefore, speculations that conceive of constructs of the mind as having any other than material reality are themselves unreal and can result only in delusion. To these assertions metaphysicians reply by denying the adequacy of the verifiability theory of meaning and of material perception as the standard of reality. Both logical positivism and dialectical materialism, they argue, conceal metaphysical assumptions, for example, that everything is observable or at least connected with something observable and that the mind has no distinctive life of its own. In the philosophical movement known as existentialism, thinkers have contended that the questions of the nature of being and of the individual's relationship to it are extremely important and meaningful in terms of human life. The investigation of these questions is therefore considered valid whether or not its results can be verified objectively.

Since the 1950s the problems of systematic analytical metaphysics have been studied in Britain by Stuart Newton Hampshire and Peter Frederick Strawson, the former concerned, in the manner of Spinoza, with the relationship between thought and action, and the latter, in the manner of Kant, with describing the major categories of experience as they are embedded in language. In the United States, metaphysics has been pursued much in the spirit of positivism by Wilfrid Stalker Sellars and Willard Van Orman Quine. Sellars has sought to express metaphysical questions in linguistic terms, and Quine has attempted to determine whether the structure of language commits the philosopher to asserting the existence of any entities whatever and, if so, what kind. In these new formulations the issues of metaphysics and ontology remain vital.

Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a “futile passion.” Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history. Because, for Heidegger, one is what one does in the world, a phenomenological reduction to one's own private experience is impossible; and because human action consists of a direct grasp of objects, it is not necessary to posit a special mental entity called a meaning to account for intentionality. For Heidegger, being thrown into the world among things in the act of realizing projects is a more fundamental kind of intentionality than that revealed in merely staring at or thinking about objects, and it is this more fundamental intentionality that makes possible the directedness analysed by Husserl.

In the mid-1900s, French existentialist Jean-Paul Sartre attempted to adapt Heidegger's phenomenology to the philosophy of consciousness, in effect returning to the approach of Husserl. Sartre agreed with Husserl that consciousness is always directed at objects but criticized his claim that such directedness is possible only by means of special mental entities called meanings. The French philosopher Maurice Merleau-Ponty rejected Sartre's view that phenomenological description reveals human beings to be pure, isolated, and free consciousnesses. He stressed the role of the active, involved body in all human knowledge, thus generalizing Heidegger's insights to include the analysis of perception. Like Heidegger and Sartre, Merleau-Ponty is an existential phenomenologist, in that he denies the possibility of bracketing existence.

In the treatise Being and Nothingness, French writer Jean-Paul Sartre presents his existential philosophical framework. He reasons that the essential nothingness of human existence leaves individuals to take sole responsibility for their own actions. Shunning the morality and constraints of society, individuals must embrace personal responsibility to craft a world for themselves. Along with focussing on the importance of exercising individual responsibility, Sartre stresses that understanding freedom of choice is the only means of authenticating human existence. A novelist and playwright as well as a philosopher, Sartre became a leader of the modern existentialist movement.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially that a personal sense of authenticity and commitment is essential to religious faith.

Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels that probed the motivations and moral justifications for his characters’ actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky’s best work, interlaces religious exploration with the story of a family’s violent quarrels over a woman and a disputed inheritance.

Twentieth-century writer and philosopher Albert Camus examined what he considered the tragic inability of human beings to understand and transcend their intolerable conditions. In his work Camus presented an absurd and seemingly unreasonable world in which some people futilely struggle to find meaning and rationality while others simply refuse to care. For example, the main character of The Stranger (1942) kills a man on a beach for no reason and accepts his arrest and punishment with dispassion. In contrast, in The Plague (1947), Camus introduces characters who act with courage in the face of absurdity.

Several existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”

The opening lines of Russian novelist Fyodor Dostoyevsky’s Notes from Underground (1864) - “I am a sick man . . . I am a spiteful man” - are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground signals Dostoyevsky’s rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader’s sense of morality as well as the foundations of rational thinking.

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; translated 1937) and The Castle (1926; translated 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theatre of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse; traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.

Nietzsche’s concept of the Übermensch has often been interpreted as one that postulates a master-slave society and has been identified with totalitarian philosophies. Many scholars deny the connection and attribute it to a misinterpretation of Nietzsche’s work.


Yet Kant tried to solve the crisis generated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside thought.

During the 19th century, the German philosopher Georg Wilhelm Friedrich Hegel revived the rationalist claim that absolutely certain knowledge of reality can be obtained by equating the processes of thought, of nature, and of history. Hegel inspired an interest in history and a historical approach to knowledge that was emphasized by Herbert Spencer in Britain and by the German school of historicism. Spencer and the French philosopher Auguste Comte brought attention to the importance of sociology as a branch of knowledge, and both extended the principles of empiricism to the study of society.

The American school of pragmatism, founded by the philosophers Charles Sanders Peirce, William James, and John Dewey at the turn of the 20th century, carried empiricism further by maintaining that knowledge is an instrument of action and that all beliefs should be judged by their usefulness as rules for predicting experiences.

In the early 20th century, epistemological problems were discussed thoroughly, and subtle shades of difference grew into rival schools of thought. Special attention was given to the relation between the act of perceiving something, the object directly perceived, and the thing that can be said to be known because of the perception. The phenomenalists contended that the objects of knowledge are the same as the objects perceived. The neorealists argued that one has direct perceptions of physical objects or parts of physical objects, rather than of one's own mental states. The critical realists took a middle position, holding that although one perceives only sensory data such as colours and sounds, these stand for physical objects and provide knowledge of them.

A method for dealing with the problem of clarifying the relation between the act of knowing and the object known was developed by the German philosopher Edmund Husserl. He outlined an elaborate procedure that he called phenomenology, by which one is said to be able to distinguish the way things appear from the way one thinks they really are, thus gaining a more precise understanding of the conceptual foundations of knowledge.

During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge - scientific knowledge; that any legitimate knowledge claim must be verifiable in experience; and hence that much that had passed for philosophy was neither true nor false but literally meaningless. Finally, following Hume and Kant, they insisted that a clear distinction be maintained between analytic and synthetic statements. The so-called verifiability criterion of meaning has undergone changes because of discussions among the logical empiricists themselves and their critics, but it has not been discarded. More recently, the sharp distinction between the analytic and the synthetic has been attacked by a number of philosophers, chiefly the American philosopher W.V.O. Quine, whose overall approach is in the pragmatic tradition.

The second of these schools of thought, generally called linguistic analysis, or ordinary-language philosophy, seems to break with traditional epistemology. The linguistic analysts undertake to examine the actual way key epistemological terms are used - terms such as knowledge, perception, and probability - and to formulate definitive rules for their use in order to avoid verbal confusion.

John Langshaw Austin (1911-1960), a British philosopher and a prominent figure in 20th-century analytic and linguistic philosophy, was born in Lancaster, England, and educated at the University of Oxford. After serving in British intelligence during World War II (1939-1945), he returned to Oxford and taught philosophy until his death.

Austin viewed the fundamental philosophical task to be that of analysing and clarifying ordinary language. He considered attention to distinctions drawn in ordinary language the most fruitful starting point for philosophical inquiry. Austin's linguistic work led to many influential concepts, such as speech-act theory. This arose from his observation that many utterances do not merely describe reality but also affect reality; they are the performance of some act rather than a report of its performance. Austin came to believe that all language is made up of speech acts. Seven of his essays were published during his lifetime. Posthumously published works include Philosophical Papers (1961), Sense and Sensibilia (1962), and How to Do Things with Words (1962).

Thomas Hill Green (1836-1882) was a British philosopher and educator who led the revolt against empiricism, the dominant philosophy in Britain during the latter part of the 19th century. He was born in Birkin, Yorkshire, England, and educated at Rugby and the University of Oxford. He taught at Oxford from 1860 until his death, initially as a fellow and after 1878 as Whyte Professor of Moral Philosophy.

A disciple of the German philosopher Georg Wilhelm Friedrich Hegel, Green insisted that consciousness provides the necessary basis for both knowledge and morality. He argued that a person's highest good is self-realization and that the individual can achieve self-realization only in society. Society has an obligation, in turn, to provide for the good of all its members. The political implications of his philosophy laid the basis for sweeping social-reform legislation in Britain. Besides being the most influential British philosopher of his time, Green was a vigorous champion of popular education, temperance, and political liberalism. His writings include Prolegomena to Ethics (1883) and Lectures on the Principles of Political Obligation (1895), both published posthumously.

The outcome of this crisis in economic and social thinking was the development of positive liberalism. As noted, certain modern liberals, like the Austrian-born economist Friedrich August von Hayek, consider this positive liberalism an essential betrayal of liberal ideals. Others, such as the British philosophers Thomas Hill Green and Bernard Bosanquet, known as the “Oxford Idealists,” devised a so-called organic liberalism designed to ‘hinder hindrances to the good life’. Green and Bosanquet advocated positive state action to promote self-fulfilment, that is, to prevent economic monopoly, abolish poverty, and secure people against the disabilities of sickness, unemployment, and old age. This liberalism developed alongside the extension of democracy.

Most philosophical discussions of consciousness arose from the mind-body issues posed by René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave consciousness a more central and active role.

The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas have quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of tendency” (unconsciousness), with the dividing line between the two states described as the threshold of consciousness. This formulation by Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.

No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. Once the primary subject matter of psychology, consciousness as an area of study suffered an almost total eclipse, later re-emerging to become a topic of current interest.

The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focussed on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.

By the 1920s, however, a remarkable revolution had occurred in psychology that was essentially to remove considerations of consciousness from psychological research for some fifty years: behaviourism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, ‘I believe that we can write a psychology . . . and never use the terms consciousness, mental states, mind . . . imagery and the like.’ Psychologists then turned almost exclusively to behaviour, described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.

During the 1950s, however, interest in the subject of consciousness returned, specifically interest in those subjects and techniques relating to altered states of consciousness, such as sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. An increase in sleep and dream research was directly fuelled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: at roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, while the sleepers' brain waves showed a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they usually reported dreams, whereas if awakened at other times they did not. This and other research clearly suggested that sleep, once considered a passive state, was instead an active state of consciousness.

During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that teach self-directed procedures of physical relaxation and focussed attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was of special interest to those interested in consciousness and meditation, and several ‘alpha training’ programs emerged.

Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from one person to another. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state and its relation to individual suggestibility and personality traits; the subject has now been largely demythologized, and the limitations of the hypnotic state are well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focussed attention.

Many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce distortions of conscious awareness. The most prominent of these drugs are lysergic acid diethylamide (LSD), mescaline, and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs.

In recent decades, as the model of an orderly but simple linkage between environment and behaviour became unsatisfactory, interest in altered states of consciousness could be taken as a visible sign of renewed interest in the topic of consciousness itself. That persons are active and intervening participants in their behaviour has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored. An entirely new area called cognitive psychology has emerged that centres on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behaviour, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons, their current feelings and thoughts, have been important. The role of consciousness, however, was often de-emphasised in favour of unconscious needs and motivations. Trends can be seen, however, toward a new emphasis on the nature of states of consciousness.

Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.

Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy. Author Anthony Kenny examines the complexities of Aquinas’s concepts of substance and accident.

In the 5th century BC, the Greek Sophists questioned the possibility of reliable and objective knowledge. Thus, a leading Sophist, Gorgias, argued that nothing really exists, that if anything did exist it could not be known, and that if knowledge were possible, it could not be communicated. Another prominent Sophist, Protagoras, maintained that no person's opinions can be said to be more correct than another's, because each person is the sole judge of his or her own experience. Plato, following his illustrious teacher Socrates, tried to answer the Sophists by postulating the existence of a world of unchanging and invisible forms, or ideas, about which exact and certain knowledge is possible. The things one sees and touches, they maintained, are imperfect copies of the pure forms studied in mathematics and philosophy. Accordingly, only the abstract reasoning of these disciplines yields genuine knowledge, whereas reliance on sense perception produces vague and inconsistent opinions. They concluded that philosophical contemplation of the unseen world of forms is the highest goal of human life.

Aristotle followed Plato in regarding abstract knowledge as superior to any other, but disagreed with him as to the proper method of achieving it. Aristotle maintained that most knowledge is derived from experience. Knowledge is gained either directly, by abstracting the defining traits of a species, or indirectly, by deducing new facts from those already known, according to the rules of logic. Careful observation and strict adherence to the rules of logic, which were first set down in systematic form by Aristotle, would help guard against the pitfalls the Sophists had exposed. The Stoic and Epicurean schools agreed with Aristotle that knowledge originates in sense perception, but against both Aristotle and Plato they maintained that philosophy is to be valued as a practical guide to life, rather than as an end in itself.

After many centuries of declining interest in rational and scientific knowledge, the Scholastic philosopher Saint Thomas Aquinas and other philosophers of the Middle Ages helped to restore confidence in reason and experience, blending rational methods with faith into a unified system of beliefs. Aquinas followed Aristotle in regarding perception as the starting point and logic as the intellectual procedure for arriving at reliable knowledge of nature, but he considered faith in scriptural authority as the main source of religious belief.

From the 17th to the late 19th century, the main issue in epistemology was reasoning versus sense perception in acquiring knowledge. For the rationalists, of whom the French philosopher René Descartes, the Dutch philosopher Baruch Spinoza, and the German philosopher Gottfried Wilhelm Leibniz were the leaders, the main source and final test of knowledge was deductive reasoning based on evident principles, or axioms. For the empiricists, beginning with the English philosophers Francis Bacon and John Locke, the main source and final test of knowledge was sense perception.

French thinker René Descartes applied rigorous scientific methods of deduction to his exploration of philosophical questions. Descartes is probably best known for his pioneering work in philosophical skepticism. Author Tom Sorell examines the concepts behind Descartes’s work Meditationes de Prima Philosophia (1641, Meditations on First Philosophy), focussing on its distinctive use of logic and the reactions it aroused.

Bacon inaugurated the new era of modern science by criticizing the medieval reliance on tradition and authority and by setting down new rules of scientific method, including the first set of rules of inductive logic ever formulated. Locke attacked the rationalist belief that the principles of knowledge are intuitively evident, arguing that all knowledge is derived from experience, either from experience of the external world, which stamps sensations on the mind, or from internal experience, in which the mind reflects on its own activities. Human knowledge of external physical objects, he claimed, is always subject to the errors of the senses, and he concluded that one cannot have absolutely certain knowledge of the physical world.

Locke held that some of our ideas (those of primary qualities) give us an adequate representation of the world around us, and he emphasized the various sources of knowledge and, above all, the limits and doubtful capacities of our minds; it was through this that Locke connected his epistemology with the defence of religious toleration. George Berkeley, however, denied Locke's belief that a distinction can be made between ideas and objects. The British philosopher David Hume continued the empiricist tradition, but he did not accept Berkeley's conclusion that knowledge was of ideas only. He divided all knowledge into two kinds: knowledge of relations of ideas - that is, the knowledge found in mathematics and logic, which is exact and certain but provides no information about the world - and knowledge of matters of fact - that is, the knowledge derived from sense perception. Hume argued that most knowledge of matters of fact depends upon cause and effect, and since no logical connection exists between any given cause and its effect, one cannot hope to know any future matter of fact with certainty. Thus, even the most reliable laws of science might not remain true - a conclusion that had a revolutionary impact on philosophy.

During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge - scientific knowledge; that any valid knowledge claim must be verifiable in experience; and, consequently, that much that had passed for philosophy was neither true nor false but literally meaningless. Finally, following Hume and Kant, they insisted that a clear distinction be maintained between analytic and synthetic statements.

The second of these schools of thought, generally called linguistic analysis, or ordinary-language philosophy, seems to break with traditional epistemology. The linguistic analysts undertake to examine the way major epistemological terms are used - terms such as knowledge, perception, and probability - and to formulate definitive rules for their use in order to avoid verbal confusion. The British philosopher John Langshaw Austin argued, for example, that to say a statement is true adds nothing to the statement except a promise by the speaker or writer; Austin did not consider truth a quality or property attaching to statements or utterances.

Positivism is a system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge.

The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte, but some positivist concepts may be traced to the British philosopher David Hume, the French philosopher Claude Henri de Saint-Simon, and Immanuel Kant.

The keystone of Kant's philosophy, sometimes called critical philosophy, is contained in his Critique of Pure Reason (1781), in which he examined the bases of human knowledge and created an individual epistemology. Like earlier philosophers, Kant differentiated modes of thinking into analytic and synthetic propositions. An analytic proposition is one in which the predicate is contained in the subject, as in the statement “Black houses are houses.” The truth of this type of proposition is evident, because to state the reverse would be to make the proposition self-contradictory. Such propositions are called analytic because truth is discovered by the analysis of the concept itself. Synthetic propositions, on the other hand, are those that cannot be arrived at by pure analysis, as in the statement “The house is black.” All the common propositions that result from experience of the world are synthetic.
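To make the contrast vivid, the two examples can be put, as a rough illustrative sketch, into modern logical notation (which Kant himself did not use); here B and H are simply shorthand for "is black" and "is a house," and h names a particular house:

\[
\text{Analytic: } \forall x\,\bigl((B(x) \wedge H(x)) \rightarrow H(x)\bigr) \qquad\qquad \text{Synthetic: } B(h)
\]

The first formula is true by its form alone - the predicate "house" is already contained in the subject concept "black house" - whereas the truth of the second depends on how the world happens to be, which is why it cannot be reached by analysis of the concepts involved.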

Propositions, according to Kant, can also be divided into two other types, empirical and a priori. Empirical propositions depend entirely on sense perception, but an a priori proposition has in itself a fundamental validity and is not based on such perception. The difference between these two types of propositions may be illustrated by the empirical “The house is black” and the a priori “Two plus two makes four.” Kant's thesis in the Critique is that it is possible to make synthetic a priori judgments. This philosophical position is usually known as transcendentalism. In describing how this type of judgment is possible, Kant regarded the objects of the material world as fundamentally unknowable; from the point of view of reason, they serve merely as the raw material from which sensations are formed. Objects in themselves have no existence, and space and time exist only as part of the mind, as “intuitions” by which perceptions are measured and judged.

Besides these intuitions, Kant stated that a number of a priori concepts, which he called categories, also exist. He divided the categories into four groups: those concerning quantity, which are unity, plurality, and totality; those concerning quality, which are reality, negation, and limitation; those concerning relation, which are substance-and-accident, cause-and-effect, and reciprocity; and those concerning modality, which are possibility, existence, and necessity. The intuitions and the categories can be applied to make judgments about experiences and perceptions, but they cannot, according to Kant, be applied to abstract ideas such as freedom and existence without leading to inconsistencies in the form of pairs of contradictory propositions, or “antinomies,” in which both members of each pair can be proved true.

In the Metaphysics of Ethics (1797) Kant described his ethical system, which is based on the belief that reason is the final authority for morality. Actions of any sort, he believed, must be undertaken from a sense of duty dictated by reason, and no action performed for expediency or solely in obedience to law or custom can be regarded as moral. Kant described two types of commands given by reason: the hypothetical imperative, which dictates a given course of action to reach a specific end, and the categorical imperative, which dictates a course of action that must be followed because of its rightness and necessity. The categorical imperative is the basis of morality and was stated by Kant in these words: “Act as if the maxim of your action were to become through your will a general natural law.”

Kant's ethical ideas are a logical outcome of his belief in the fundamental freedom of the individual, as stated in his Critique of Practical Reason (1788). He did not regard this freedom as the lawless freedom of anarchy, but rather as the freedom of self-government, the freedom to obey consciously the laws of the universe as revealed by reason. He believed that the welfare of each individual should properly be regarded as an end in itself and that the world was progressing toward an ideal society in which reason would “bind every law giver to make his laws so that they could have sprung from the united will of an entire people, and to regard every subject, in so far as he wishes to be a citizen, on the basis of whether he has conformed to that will.” In his treatise Perpetual Peace (1795) Kant advocated the establishment of a world federation of republican states.

Kant had a greater influence than any other philosopher of modern times. Kantian philosophy, particularly as developed by the German philosopher Georg Wilhelm Friedrich Hegel, was the basis on which the structure of Marxism was built; Hegel's dialectical method, which was used by Karl Marx, was an outgrowth of the method of reasoning by “antinomies” that Kant used. The German philosopher Johann Fichte, Kant's pupil, rejected his teacher's division of the world into objective and subjective parts and developed an idealistic philosophy that also had great influence on 19th-century socialists. One of Kant's successors at the University of Königsberg, J.F. Herbart, incorporated some of Kant's ideas in his system of pedagogy.

Besides works on philosophy, Kant wrote many treatises on various scientific subjects, many in the field of physical geography. His most important scientific work was General Natural History and Theory of the Heavens (1755), in which he advanced the hypothesis of the formation of the universe from a spinning nebula, a hypothesis that was later developed independently by Pierre de Laplace.

Among Kant's other writings are Prolegomena to Any Future Metaphysics (1783), Metaphysical Rudiments of Natural Philosophy (1786), Critique of Judgment (1790), and Religion Within the Limits of Reason Alone (1793).

Metaphysics is the branch of philosophy that is concerned with the nature of ultimate reality. Metaphysics is customarily divided into ontology, which deals with the question of how many fundamentally distinct sorts of entities compose the universe, and metaphysics proper, which is concerned with describing the most general traits of reality. These general traits together define reality and would presumably characterize any universe whatever. Because these traits are not peculiar to this universe, but are common to all possible universes, metaphysics may be conducted at the highest level of abstraction. Ontology, by contrast, because it investigates the ultimate divisions within this universe, is more closely related to the physical world of human experience.

The term metaphysics is believed to have originated in Rome about 70 BC, with the Greek Peripatetic philosopher Andronicus of Rhodes (flourished 1st century BC) and his edition of the works of Aristotle. In the arrangement of Aristotle's works by Andronicus, the treatise originally called First Philosophy, or Theology, followed the treatise Physics. Hence, the First Philosophy became known as meta (ta) physica, or “following (the) Physics,” later shortened to Metaphysics. The word took on the connotation, in popular usage, of matters transcending material reality. In the philosophic sense, however, particularly as opposed to the use of the word by occultists, metaphysics applies to all reality and is distinguished from other forms of inquiry by its generality.

The subjects treated in Aristotle's Metaphysics (substance, causality, the nature of being, and the existence of God) fixed the content of metaphysical speculation for centuries. Among the medieval Scholastic philosophers, metaphysics was known as the “transphysical science” on the assumption that, by means of it, the scholar could make the transition philosophically from the physical world to a world beyond sense perception. The 13th-century Scholastic philosopher and theologian St. Thomas Aquinas declared that the cognition of God, through a causal study of finite sensible beings, was the aim of metaphysics. With the rise of scientific study in the 16th century, the reconciliation of science and faith in God became an increasingly important problem.

Before the time of Kant, metaphysics was characterized by a tendency to construct theories based on a priori knowledge, that is, knowledge derived from reason alone, in contradistinction to empirical knowledge, which is gained by reference to the facts of experience. From a priori knowledge were deduced general propositions held to be true of all things. The method of inquiry based on a priori principles is known as rationalistic. This method may be subdivided into monism, which holds that the universe is made up of a single fundamental substance; dualism, the belief in two such substances; and pluralism, which proposes the existence of several fundamental substances.

The monists, agreeing that only one basic substance exists, differ in their descriptions of its principal characteristics. Thus, in idealistic monism the substance is believed to be purely mental; in materialistic monism it is held to be purely physical; and in neutral monism it is considered neither exclusively mental nor solely physical. The idealistic position was held by the Irish philosopher George Berkeley, the materialistic by the English philosopher Thomas Hobbes, and the neutral by the Dutch philosopher Baruch Spinoza. Spinoza expounded a pantheistic view of reality in which the universe is identical with God and everything contains God's substance.

George Berkeley set out to challenge what he saw as the atheism and skepticism inherent in the prevailing philosophy of the early 18th century. His initial publications, which asserted that no objects or matter existed outside the human mind, were met with disdain by the London intelligentsia of the day. Berkeley aimed to explain his “Immaterialist” theory, part of the school of thought known as idealism, to a more general audience in Three Dialogues between Hylas and Philonous (1713).

The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.

In the work of Gottfried Wilhelm Leibniz, the universe is held to consist of many distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.

Other philosophers have held that knowledge of reality is not derived from a priori principles, but is obtained only from experience. This type of metaphysics is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or agnosticism in respect to the soul and the reality of God.

Several of these views were combined in the critical philosophy of Kant, which is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience, and rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.

These principles are held to be necessary and universal in their application to experience, for in Kant's view the mind furnishes the archetypal forms and categories, which are manifested only in experience. Because their logic precedes the experience in which they appear, these categories or structural principles are transcendental: they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience to the realm of ultimate reality constitutes the critical feature of his philosophy, giving the key word to the titles of his three leading treatises: Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

Since the formulation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories are radical empiricism, or pragmatism, a native American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, whose foremost exponents are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent, or creative, evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In the theory of voluntarism, the will is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analysed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analysed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined. The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.

Mysticism is an immediate, direct, intuitive knowledge of God or of ultimate reality attained through personal religious experience. Wide variations are found in both the form and the intensity of mystical experience. The authenticity of any such experience, however, is not dependent on the form, but solely on the quality of life that follows the experience. The mystical life is characterized by enhanced vitality, productivity, serenity, and joy as the inner and outward aspects harmonize in union with God.

Daoism (Taoism) emphasizes the importance of unity with nature and of yielding to the natural flow of the universe. This contrasts greatly with Confucianism, another Chinese philosophy, which focuses on society and ethics. The fundamental text of Daoism is traditionally attributed to Laozi, a legendary Chinese philosopher who supposedly lived in the 500s BC.

Elaborate philosophical theories have been developed in an attempt to explain the phenomena of mysticism. Thus, in Hindu philosophy, and particularly in the metaphysical system known as the Vedanta, the self, or atman, in man is identified with the supreme self, or Brahman, of the universe. The apparent separateness and individuality of beings and events are held to be an illusion (Sanskrit maya), or convention of thought and feeling. This illusion can be dispelled through the realization of the essential oneness of atman and Brahman. When the religious initiate has overcome the beginningless ignorance (Sanskrit avidya) upon which the apparent separability of subject and object, of self and non-self, depends, a mystical state of liberation, or moksha, is attained. The Hindu philosophy of Yoga incorporates perhaps the most comprehensive and rigorous discipline ever designed to transcend the sense of personal identity and to clear the way for an experience of union with the divine self. In China, Confucianism is formalistic and antimystical, but Daoism, as expounded by its traditional founder, the Chinese philosopher Laozi (Lao-tzu), has a strong mystical emphasis.

The philosophical ideas of the ancient Greeks were predominantly naturalistic and rationalistic, but an element of mysticism found expression in the Orphic and other sacred mysteries. A late Greek movement, Neoplatonism, was based on the philosophy of Plato and shows the influence of the mystery religions. The Muslim Sufi sect embraces a form of theistic mysticism closely resembling that of the Vedanta. The doctrines of Sufism found their most memorable expression in the symbolic works of the Persian poets Mohammed Shams od-Din, better known as Hafiz, and Jalal al-Din Rumi, and in the writings of the Persian al-Ghazali. Mysticism of the pre-Christian period is evidenced in the writings of the Jewish-Hellenistic philosopher Philo Judaeus.

The Imitation of Christ, the major devotional work of the medieval German monk Thomas à Kempis, was written more than 500 years ago to aid fellow members of religious orders. The book, simple in language and style, has become one of the most influential works in Christian literature. It is a thoughtful yet practical treatise that guides the reader toward a spiritual union with God through the teachings of Jesus Christ and the monastic qualities of poverty, chastity, and obedience. In it, Kempis urges Christians to live each day as if it might be their last.

Saint Paul was the first great Christian mystic. The New Testament writings best known for their deeply mystical emphasis are Paul’s letters and the Gospel of John. Christian mysticism as a system, however, arose from Neoplatonism through the writings of Dionysius the Areopagite, or Pseudo-Dionysius. The 9th-century Scholastic philosopher John Scotus Erigena translated the works of Pseudo-Dionysius from Greek into Latin and thus introduced the mystical theology of Eastern Christianity into Western Europe, where it was combined with the mysticism of the early Christian prelate and theologian Saint Augustine.

In the Middle Ages mysticism was often associated with monasticism. Many celebrated mystics are found among the monks of both the Eastern church and the Western church, particularly the 14th-century Hesychasts of Mount Athos in the former, and Saints Bernard of Clairvaux, Francis of Assisi, and John of the Cross in the latter. The French monastery of Saint Victor, near Paris, was an important centre of mystical thought in the 12th century. The renowned mystic and Scholastic philosopher Saint Bonaventure was a disciple of the monks of St. Victor. St. Francis of Assisi, who derived his mysticism directly from the New Testament, without reference to Neoplatonism, remains a dominant figure in modern mysticism. Among the mystics of Holland were Jan van Ruysbroeck and Gerhard Groote, the latter a religious reformer and founder of the monastic order known as the Brothers of the Common Life. Johannes Eckhart, called Meister Eckhart, was the foremost mystic of Germany.

Written by an anonymous English monk in the late 14th century, ‘The Cloud of Unknowing’ has been deeply influential in Christian mysticism. The author stressed the need for contemplation to understand and know God, with the goal of experiencing the spiritual touch of God, and perhaps even achieving a type of spiritual union with God here on earth. The author encourages the faithful to meditate as a way of prayer, putting everything but God out of their minds, even if, at first, all they are aware of is a cloud of unknowing.

Other important German mystics are Johannes Tauler and Heinrich Suso, who were followers of Eckhart and members of a group called the Friends of God. One member of this group wrote the German Theology that influenced Martin Luther. Prominent later figures include Thomas à Kempis, generally regarded as the author of The Imitation of Christ. English mystics of the 14th and 15th centuries include Margery Kempe, Richard Rolle, Walter Hilton, Julian of Norwich, and the anonymous author of The Cloud of Unknowing, an influential treatise on mystical prayer.

Several distinguished Christian mystics have been women, notably Hildegard of Bingen, Saint Catherine of Siena, and Saint Teresa of Ávila. The 17th-century French mystic Jeanne Marie Bouvier de la Motte Guyon introduced into France the mystical doctrine of quietism.

Sixteenth-century Spanish mystic and religious reformer Saint Teresa of Ávila’s books on prayer and contemplation frequently dealt with her intense visions of God. Her autobiography, The Life of Saint Teresa of Ávila, written in the 1560s, is frank and unsophisticated in style, and its vocabulary and theology are accessible to the everyday reader. In it, Teresa described the physical and spiritual sensations that accompanied her religious raptures.

By its pursuit of spiritual freedom, sometimes at the expense of theological formulas and ecclesiastical discipline, mysticism may have contributed to the origin of the Reformation, although it inevitably came into conflict with Protestant, as it had with Roman Catholic, religious authorities. The Counter Reformation inspired the Spiritual Exercises of Saint Ignatius of Loyola. The Practice of the Presence of God by Brother Lawrence was a classic French work of a later date. The most notable German Protestant mystics were Jakob Boehme, author of Mysterium Magnum (The Great Mystery), and Kaspar Schwenkfeld. Mysticism finds expression in the theology of many Protestant denominations and is a salient characteristic of such sects as the Anabaptists and the Quakers.

The New England Congregational divine Jonathan Edwards exhibited a strong mystical tendency, and the religious revivals that began in his time and spread throughout the United States during the 19th century derived much of their peculiar power from the assumption of mystical principles, great emphasis being placed on heightened feeling as a direct intuition of the will of God. Mysticism manifested itself in England in the works of the 17th-century Cambridge Platonists; in those of the devotional writer William Law, author of A Serious Call to a Devout and Holy Life; and in the art and poetry of William Blake.

The term religious revival has been widely used among Protestants since the early 18th century to denote periods of marked religious interest. Evangelistic preaching and prayer meetings, frequently accompanied by intense emotionalism, are characteristic of such periods, which are intended to renew the faith of church members and to bring others to profess their faith openly for the first time. By an extension of its meaning, the term is sometimes applied to various important religious movements of the past. Instances are recorded in the Scriptures as occurring both in the history of the Jews and in the early history of the Christian church. In the Middle Ages revivals took place in connection with the Crusades and under the influence of the monastic orders, sometimes with strange adjuncts, as with the Flagellants and the dancing mania. The Reformation of the 16th century was also accompanied by revivals of religion.

It is more accurate, however, to limit the application of the term revival to the history of modern Protestantism, especially in Britain and the United States where such movements have flourished with unusual vigour. The Methodist churches originated from a widespread evangelical movement in the first half of the 18th century. This was later called the Wesleyan movement or Wesleyan revival. The Great Awakening was the common designation for the revival of 1740-42 that took place in New England and other parts of North America under the Congregational clergyman Joseph Bellamy, and three Presbyterian clergymen, Gilbert Tennent, William Tennent, and their father, the educator William Tennent. Both Princeton University and Dartmouth College had their origin in this movement. Toward the end of the 18th century a fresh series of revivals began in America, lasting intermittently from 1797 to 1859. In New England the beginning of this long period was called the evangelical reawakening.

Churches soon came to depend upon revivals for their growth and even for their existence, and, as time went on, the work was also taken up by itinerant preachers called circuit riders. The early years of the 19th century were marked by great missionary zeal, extending even to foreign lands. In Tennessee and Kentucky, camp meetings, great open-air assemblies, began about 1800 to play an important part in the evangelical work of the Methodist Church, now the United Methodist Church. One of the most notable products of the camp meeting idea was the late 19th-century Chautauqua Assembly, a highly successful educational endeavour. An outstanding religious revival of the 19th century was the Oxford movement (1833-45) in the Church of England, which resulted in the modern English High Church movement. Distinctly a revival, it was of a type different from those of the two preceding centuries. The great American revival of 1859-61 began in New England, particularly in Connecticut and Massachusetts, and extended to New York and other states. It is believed that in a single year half a million converts were received into the churches. Another remarkable revival, in 1874-75, originated in the labours of the American evangelists Dwight L. Moody and Ira D. Sankey. Organized evangelistic campaigns have sometimes had great success under the leadership of professional evangelists, among them Billy Sunday, Aimee Semple McPherson, and Billy Graham. The Salvation Army carries on its work largely by revivalistic methods.

American religious writer and poet Thomas Merton joined a monastery in 1941 and was later ordained as a Roman Catholic priest. He is known for his autobiography, The Seven Storey Mountain, which was published in 1948.

The 20th century has experienced a revival of interest in both Christian and non-Christian mysticism. Early commentators of note were Austrian Roman Catholic Baron Friedrich von Hügel, British poet and writer Evelyn Underhill, American Quaker Rufus Jones, the Anglican prelate William Inge, and German theologian Rudolf Otto. A prominent nonclerical commentator was American psychologist and philosopher William James in The Varieties of Religious Experience (1902).

At the turn of the century, American psychologist and philosopher William James gave a series of lectures on religion at Scotland’s University of Edinburgh. In the twenty lectures he delivered between 1901 and 1902, published together as The Varieties of Religious Experience (1902), James discussed such topics as the existence of God, religious conversions, and immortality. In his lectures on mysticism, James defined the characteristics of a mystical experience - a state of consciousness in which God is directly experienced. He also quoted accounts of mystical experiences given by important religious figures from many different religious traditions.

In non-Christian traditions, the leading commentator on Zen Buddhism was Japanese scholar Daisetz Suzuki; on Hinduism, Indian philosopher Sarvepalli Radhakrishnan; and on Islam, British scholar R. A. Nicholson. The last half of the 20th century saw increased interest in Eastern mysticism. The mystical strain in Judaism, which received particular emphasis in the writings of the Kabbalists of the Middle Ages and in the Hasidic movement of the 18th century, was again pointed up by the modern Austrian philosopher and scholar Martin Buber. Mid-20th-century mystics of note included French social philosopher Simone Weil, French philosopher Pierre Teilhard de Chardin, and American Trappist monk Thomas Merton.

Comte chose the word positivism on the ground that it indicated the “reality” and “constructive tendency” that he claimed for the theoretical aspect of the doctrine. He was, in the main, interested in a reorganization of social life for the good of humanity through scientific knowledge, and thus through control of natural forces. The two primary components of positivism, the philosophy and the polity (or program of individual and social conduct), were later welded by Comte into a whole under the conception of a religion, in which humanity was the object of worship. Many of Comte's disciples refused, however, to accept this religious development of his philosophy, because it seemed to contradict the original positivist philosophy. Many of Comte's doctrines were later adapted and developed by the British social philosophers John Stuart Mill and Herbert Spencer and by the Austrian philosopher and physicist Ernst Mach.

In the early 20th century British mathematician and philosopher Bertrand Russell, along with British mathematician and philosopher Alfred North Whitehead, attempted to prove that mathematics and numbers can be understood as groups of concepts, or sets. Russell and Whitehead tried to show that mathematics is closely related to logic and, in turn, that ordinary sentences can be logically analysed using mathematical symbols for words and phrases. This idea resulted in a new symbolic language, used by Russell in a field he termed philosophical logic, in which philosophical propositions were reformulated and examined according to his symbolic logic.
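As a rough illustration of the kind of symbolic analysis Russell had in mind (a simplified sketch in modern notation, not a passage from Principia Mathematica), his well-known example "The author of Waverley was Scottish" can be rewritten so that the descriptive phrase disappears in favour of quantifiers and predicates; here A(x) abbreviates "x wrote Waverley" and S(x) abbreviates "x was Scottish":

\[
\exists x\,\bigl( A(x) \;\wedge\; \forall y\,( A(y) \rightarrow y = x ) \;\wedge\; S(x) \bigr)
\]

That is, there is exactly one thing that wrote Waverley, and that thing was Scottish. Recasting ordinary sentences in this way was intended to expose their underlying logical form and so to dissolve puzzles created by their surface grammar.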

During the early 20th century a group of philosophers who were concerned with developments in modern science rejected the traditional positivist view that personal experience is the basis of true knowledge and emphasized instead the importance of scientific verification. This group became known as the logical positivists, and it drew on the work of the Austrian philosopher Ludwig Wittgenstein and the British philosophers Bertrand Russell and G.E. Moore. It was Wittgenstein's Tractatus Logico-philosophicus (1921; German-English parallel text, 1922) that proved to be of decisive influence in the rejection of metaphysical doctrines as meaningless and the acceptance of empiricism as a matter of logical necessity.

Philosophy, for Moore, was basically a two-fold activity. The first part involves analysis, that is, the attempt to clarify puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Moore was perplexed, for example, by the claim of some philosophers that time is unreal. In analysing this assertion, he maintained that the proposition “time is unreal” is logically equivalent to “there are no temporal facts.” (“I read the article yesterday” is an example of a temporal fact.) Once the meaning of an assertion containing the problematic concept is clarified, the second task is to determine whether justifying reasons exist for believing the assertion. Moore's diligent attention to conceptual analysis as a means of achieving clarity established him as one of the founders of the contemporary analytic and linguistic emphasis in philosophy.

Moore's most famous work, Principia Ethica (1903), contains his claim that the concept of good refers to a simple, unanalyzable, indefinable quality of things and situations. It is a nonnatural quality, for it is apprehended not by sense experience but by a kind of moral intuition. The quality goodness is evident, argued Moore, in such experiences as friendship and aesthetic enjoyment. The moral concepts of right and duty are then analysed in terms of producing whatever possesses goodness.

Several of Moore's essays, including “The Refutation of Idealism” (1903), contributed to developments in modern philosophical realism. An empiricist in his approach to knowledge, he did not identify experience with sense experience, and he avoided the skepticism that often accompanies empiricism. He came to the defence of the common-sense point of view, which holds that experience results in knowledge of an external world that exists independently of the mind.

Moore also wrote Ethics (1912), Philosophical Studies (1922), and Philosophical Papers (1959) and edited (1921-47) Mind, a leading British philosophical journal.

Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analysed into less complex propositions until one arrives at simple, or elementary, propositions. Correspondingly, the world is composed of complex facts that can be analysed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or ‘states of affairs’. He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts - the propositions of science - are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.

Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one looks to see how language is used, the variety of linguistic usage becomes clear. Words are like tools, and just as tools serve different functions, so linguistic expressions serve many different functions. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.

The positivists today, who have rejected this so-called Vienna school of philosophy, prefer to call themselves logical empiricists to dissociate themselves from the emphasis of the earlier thinkers on scientific verification. They maintain that the verification principle itself is philosophically unverifiable.

Positivism is the system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge.

The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte, but some positivist ideas may be traced to the British philosopher David Hume, the French philosopher the Comte de Saint-Simon, and the German philosopher Immanuel Kant.

Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the theoretical character of the structural principles of this empirical knowledge.

Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience, strictly by knowledge, to the realm of ultimate reality constitutes the critical feature of his philosophy. Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

The English philosopher John Locke (1632-1704) founded the school of empiricism. Locke explained his theory of empiricism, a philosophical doctrine holding that all knowledge is based on experience, in An Essay Concerning Human Understanding (1690). Locke believed the human mind to be a blank slate at birth that gathered all its information from its surroundings - starting with simple ideas and combining these simple ideas into more complex ones. His theory greatly influenced education in Great Britain and the United States. Locke believed that education should begin in early childhood and should proceed gradually as the child learns increasingly complex ideas.

Locke was born in the village of Wrington, Somerset, on August 29, 1632. He was educated at the University of Oxford and lectured on Greek, rhetoric, and moral philosophy at Oxford from 1661 to 1664. In 1667 Locke began his association with the English statesman Anthony Ashley Cooper, 1st earl of Shaftesbury, to whom Locke was friend, adviser, and physician. Shaftesbury secured for Locke a series of minor government appointments. In 1669, in one of his official capacities, Locke wrote a constitution for the proprietors of the Carolina Colony in North America, but it was never put into effect. In 1675, after the liberal Shaftesbury had fallen from favour, Locke went to France. In 1679 he returned to England, but in view of his opposition to the Roman Catholicism favoured by the English monarchy at that time, he soon found it expedient to return to the Continent. From 1683 to 1688 he lived in Holland, and following the so-called Glorious Revolution of 1688 and the restoration of Protestantism to favour, Locke returned once more to England. The new king, William III, appointed Locke to the Board of Trade in 1696, a position from which he resigned because of ill health in 1700. He died in Oates on October 28, 1704.

The ideas of 17th-century English philosopher and political theorist John Locke greatly influenced modern philosophy and political thought. Locke, who is best known for establishing the philosophical doctrine of empiricism, was criticized for his “atheistic” proposition that morality is not innate within human beings. However, Locke was a religious man, and the influence of his faith was overlooked by his contemporaries and subsequent readers. Author John Dunn explores the influence of Locke’s Anglican beliefs on works such as An Essay Concerning Human Understanding (1690).

Locke's empiricism emphasizes the importance of the experience of the senses in the pursuit of knowledge, rather than intuitive speculation or deduction. The empiricist doctrine was first expounded by the English philosopher and statesman Francis Bacon early in the 17th century, but Locke gave it systematic expression in his Essay Concerning Human Understanding (1690). He regarded the mind of a person at birth as a tabula rasa, a blank slate upon which experience imprinted knowledge, and did not believe in intuition or theories of innate conceptions. Locke also held that all persons are born good, independent, and equal.

English philosopher John Locke anonymously published his Two Treatises of Government (1690) in the same year as his famous Essay Concerning Human Understanding. In the Second Treatise, Locke described his concept of a ‘civil government’. Locke excluded absolute monarchy from his definition of civil society, because he believed that the people must consent to be ruled. This argument later influenced the authors of the Declaration of Independence and the Constitution of the United States.

In his Two Treatises of Government (1690), Locke attacked the theory of the divine right of kings and the nature of the state as conceived by the English philosopher and political theorist Thomas Hobbes. In brief, Locke argued that sovereignty did not reside in the state but with the people, and that the state is supreme, but only if it is bound by civil and what he called ‘natural’ law. Many of Locke's political ideas, such as those relating to natural rights, property rights, the duty of the government to protect these rights, and the rule of the majority, were later embodied in the U.S. Constitution.

Locke further held that revolution was not only a right but often an obligation, and he advocated a system of checks and balances in government. He also believed in religious freedom and in the separation of church and state.

Locke's influence in modern philosophy has been profound and, with his application of empirical analysis to ethics, politics, and religion, he remains one of the most important and controversial philosophers of all time. Among his other works are Some Thoughts Concerning Education (1693) and The Reasonableness of Christianity (1695).

Pragmatism is a philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in understanding and practicality and an equally American distrust of abstract theories and ideologies.

American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Some Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C.S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.

The Association for International Conciliation first published William James’s pacifist statement, “The Moral Equivalent of War,” in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism, a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution. Spelling and grammar represent the standards of the time.

Pragmatists regarded all theories and institutions as tentative hypotheses and solutions, and for this reason they believed that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and the radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists, moreover, were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept ‘brittle’, for example, is given by the observed consequences or properties that objects called ‘brittle’ exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called ‘the will to believe’ and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and societies are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. For pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has also been renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty’s interpretation of the tradition.

In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.

It may prove fitting at this point to turn to Kant's most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, who negated Kant's criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism in opposition to Kant's critical transcendentalism.

Since the formulation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitively the limits of philosophical speculation. Among these are phenomenalism, as exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent, or creative, evolution, originated by the French philosopher Henri Bergson; the philosophy of the organism, elaborated by Alfred North Whitehead; pragmatism; instrumentalism; and voluntarism. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief. According to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. Voluntarism holds that the will is the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analysed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analysed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined. The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and intuitive creativity.

Comte chose the word positivism on the ground that it suggested the ‘reality’ and ‘constructive tendency’ that he claimed for the theoretical aspect of the doctrine. He was, in the main, interested in a reorganization of social life for the good of humanity through scientific knowledge, and thus the control of natural forces. The two primary components of positivism, the philosophy and the polity (or a program of individual and social conduct), were later welded by Comte into a whole under the conception of a religion, in which humanity was the object of worship.

In response to the scientific, political, and industrial revolution of his day, Comte was fundamentally concerned with an intellectual, moral, and political reorganization of the social order. Adoption of the scientific attitude was the key, he thought, to such a reconstruction.

Comte also argued that an empirical study of historical processes, particularly of the progress of the various interrelated sciences, reveals a law of three stages that governs human development. He analysed these stages in his major work, the six-volume Course of Positive Philosophy (1830-1842; translated 1853). Because of the nature of the human mind, each science or branch of knowledge passes through “three different theoretical states: the theological or fictitious state; the metaphysical or abstract state; and, lastly, the scientific or positive state.” At the theological stage, events are immaturely explained by appealing to the will of the gods or of God. At the metaphysical stage phenomena are explained by appealing to abstract philosophical categories. The final evolutionary stage, the scientific, involves relinquishing any quest for absolute explanations of causes. Attention is focussed altogether on how phenomena are related, with the aim of arriving at generalizations subject to observational verification. Comte's work is considered the classical expression of the positivist attitude - namely, that the empirical sciences are the only adequate source of knowledge.

Although he rejected belief in a transcendent being, Comte recognized the value of religion in contributing to social stability. In his four-volume System of Positive Polity (1851-1854; translated 1875-1877), he proposed his religion of humanity, aimed at promoting socially beneficial behaviour. Comte's chief significance, however, derives from his role in the historical development of positivism.

Edmund Husserl inherited from Brentano the view that the central problem in understanding thought is that of explaining the way in which an intentional direction, or content, can belong to the mental phenomenon that exhibits it. What Husserl discovered when he contemplated the content of his mind were such acts as remembering, desiring, and perceiving, as well as the abstract content of these acts, which Husserl called meanings. These meanings, he claimed, enabled an act to be directed toward an object under a certain aspect. Such directedness, called intentionality, he held to be the essence of consciousness. Transcendental phenomenology, according to Husserl, was the study of the basic components of the meanings that make intentionality possible. Later, in the Méditations Cartésiennes (1931; Cartesian Meditations, 1960), he introduced genetic phenomenology, which he defined as the study of how these meanings are built up in the course of experience.

Edmund Husserl is considered the founder of phenomenology. This 20th-century philosophical movement is dedicated to the description of phenomena as they present themselves through perception to the conscious mind.

Husserl introduced the term in his book Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie (1913; Ideas: A General Introduction to Pure Phenomenology, 1931). Early followers of Husserl, such as the German philosopher Max Scheler, who was influenced by Husserl's earlier book, Logische Untersuchungen (two volumes, 1900 and 1901; Logical Investigations, 1970), claimed that the task of phenomenology is to study essences, such as the essence of emotions. Although Husserl himself never gave up his early interest in essences, he later held that only the essences of certain special conscious structures are the proper object of phenomenology. As formulated by Husserl after 1910, phenomenology is the study of the structures of consciousness that enable consciousness to refer to objects outside itself. This study requires reflection on the content of the mind to the exclusion of everything else. Husserl called this type of reflection the phenomenological reduction. Because the mind can be directed toward nonexistent as well as real objects, Husserl recognized that phenomenological reflection does not presuppose that anything exists, but amounts to a ‘bracketing of existence’ - that is, setting aside the question of the real existence of the contemplated object.

Husserl argued against his early position, which he called psychologism, in the Logical Investigations (1900-1901; translated 1970). In this book, regarded as a radical departure in philosophy, he contended that the philosopher's task is to contemplate the essences of things, and that the essence of an object can be arrived at by systematically varying that object in the imagination. Husserl noted that consciousness is always directed toward something. He called this directedness intentionality and argued that consciousness contains ideal, unchanging structures called meanings, which determine what object the mind is directed toward at any given time.

During his tenure (1901-1916) at the University of Göttingen, Husserl attracted many students, who began to form a distinct phenomenological school, and he wrote his most influential work, Ideas: A General Introduction to Pure Phenomenology (1913; translated 1931). In this book Husserl introduced the term phenomenological reduction for his method of reflection on the meanings the mind employs when it contemplates an object. Because this method concentrates on meanings that are in the mind, whether or not the object present to consciousness actually exists, he proceeded to give detailed analyses of the mental structures involved in perceiving particular types of objects, describing in detail, for instance, his perception of the apple tree in his garden. Thus, although phenomenology does not assume the existence of anything, it is nonetheless a descriptive discipline; according to Husserl, phenomenology is devoted, not to inventing theories, but rather to describing the “things themselves.”

After 1916 Husserl taught at the University of Freiburg. Phenomenology had been criticized as an essentially solipsistic method, confining the philosopher to the contemplation of private meanings, so in the Cartesian Meditations (1931; translated 1960) Husserl attempted to show how the individual consciousness can be directed toward other minds, society, and history. Husserl died in Freiburg on April 26, 1938.

Husserl's phenomenology had a great influence on a younger colleague at Freiburg, Martin Heidegger, who developed existential phenomenology, and Jean-Paul Sartre and French existentialism. Phenomenology remains one of the most vigorous tendencies in contemporary philosophy, and its impact has also been felt in theology, linguistics, psychology, and the social sciences.

Phenomenology attempts to describe reality as pure experience by suspending all beliefs and assumptions about the world. Though first defined as descriptive psychology, phenomenology developed into a philosophical rather than a psychological investigation of the nature of human beings. Influenced by his colleague Edmund Husserl, the German philosopher Martin Heidegger published Sein und Zeit (Being and Time) in 1927, an effort to describe the phenomenon of being by considering the full scope of existence.

All phenomenologists follow Husserl in attempting to use pure description. Thus, they all subscribe to Husserl's slogan ‘To the things themselves’. They differ among themselves, however, over whether the phenomenological reduction can be carried out, and over what counts as a pure description of experience. Martin Heidegger, Husserl's colleague and most brilliant critic, claimed that phenomenology should make manifest what is hidden in ordinary, everyday experience. He therefore endeavoured, in Being and Time, to describe what he called the structure of everydayness, or being-in-the-world, which he found to be an interconnected system of equipment, social roles, and purposes.

Because, for Heidegger, one is what one does in the world, a phenomenological reduction to one's own private experience is impossible. Because human action consists of a direct grasp of objects, positing a special mental entity called a meaning to account for intentionality is not necessary. For Heidegger, being thrown into the world among things in the act of realizing projects is a more fundamental kind of intentionality than that revealed in merely staring at or thinking about objects, and it is this more fundamental intentionality that makes possible the directedness analysed by Husserl.

In the mid-1900s, French existentialist Jean-Paul Sartre attempted to adapt Heidegger's phenomenology to the philosophy of consciousness, in effect returning to the approach of Husserl. Sartre agreed with Husserl that consciousness is always directed at objects but criticized his claim that such directedness is possible only by means of special mental entities called meanings. The French philosopher Maurice Merleau-Ponty rejected Sartre's view that phenomenological description reveals human beings to be pure, isolated, and free consciousness. He stressed the role of the active, involved body in all human knowledge, thus generalizing Heidegger's insights to include the analysis of perception. Like Heidegger and Sartre, Merleau-Ponty is an existential phenomenologist, in that he denies the possibility of bracketing existence.

Phenomenology has had a pervasive influence on 20th-century thought. Phenomenological versions of theology, sociology, psychology, psychiatry, and literary criticism have been developed, and phenomenology remains one of the most important schools of contemporary philosophy.

German philosopher Martin Heidegger strongly influenced the development of the 20th-century philosophical school of existential phenomenology, which examines the relationship between phenomena and individual consciousness. His inquiries into the meaning of ‘authentic’ or ‘inauthentic’ existence greatly influenced a broad range of thinkers, including French existentialist Jean-Paul Sartre. Author Michael Inwood explores Heidegger’s key concept of Dasein, or ‘Being’, which was first expounded in his major work Being and Time.

Besides Husserl, Heidegger was especially influenced by the pre-Socratics, by Danish philosopher Søren Kierkegaard, and by German philosopher Friedrich Nietzsche. In developing his theories, Heidegger rejected traditional philosophic terminology in favour of an individual interpretation of the works of past thinkers. He applied original meanings and etymologies to individual words and expressions, and coined hundreds of new, complex words. Heidegger was concerned with what he considered the essential philosophical question: What is it, to be? This led to the question of what kind of ‘Being’ human beings have. They are, he said, thrown into a world that they have not made but that consists of potentially useful things, including cultural and natural objects. Because these objects come to humanity from the past and are used in the present for the sake of future goals, Heidegger posited a fundamental relation between the mode of being of objects, of humanity, and of the structure of time.

The individual is, however, always in danger of being submerged in the world of objects, everyday routine, and the conventional, shallow behaviour of the crowd. The feeling of dread (Angst) brings the individual to a confrontation with death and the ultimate meaninglessness of life, but only in this confrontation can an authentic sense of Being and of freedom be attained.

After 1930, Heidegger turned, in such works as Einführung in die Metaphysik (An Introduction to Metaphysics, 1953), to the interpretation of particular Western conceptions of Being. He felt that, in contrast to the reverent ancient Greek conception of being, modern technological society has fostered a purely manipulative attitude that has deprived Being and human life of meaning - a condition he called nihilism. Humanity has forgotten its true vocation and must recover the deeper understanding of Being (achieved by the early Greeks and lost by subsequent philosophers) to be receptive to new understandings of Being.

Heidegger's original treatment of such themes as human finitude, death, nothingness, and authenticity led many observers to associate him with existentialism, and his work had a crucial influence on French existentialist Jean-Paul Sartre. Heidegger, however, eventually repudiated existentialist interpretations of his work. His thought directly influenced the work of the French philosophers Michel Foucault and Jacques Derrida and of the German sociologist Jürgen Habermas. Since the 1960s his influence has spread beyond continental Europe and has had an increasing impact on philosophy in English-speaking countries worldwide.

Like Heidegger and Sartre, Maurice Merleau-Ponty (1908-1961) was a French existentialist philosopher; his phenomenological studies of the role of the body in perception and society opened a new field of philosophical investigation. He taught at the University of Lyon, at the Sorbonne, and, after 1952, at the Collège de France. His first important work was The Structure of Behaviour (1942; translated 1963), an interpretative analysis of behaviourism. His major work, Phenomenology of Perception (1945; translated 1962), is a detailed study of perception, influenced by the German philosopher Edmund Husserl's phenomenology and by Gestalt psychology. In it, he argues that science presupposes an original and unique perceptual relation to the world that cannot be explained or even described in scientific terms. The book can be viewed as a critique of cognitivism - the view that the workings of the human mind can be understood in terms of rules or programs. It is also a telling critique of the existentialism of his contemporary Jean-Paul Sartre, showing how human freedom is never total, as Sartre claimed, but is limited by our embodied situation.

Born in Vienna on April 26, 1889, Wittgenstein was raised in a wealthy and cultured family. After attending schools in Linz and Berlin, he went to England to study engineering at the University of Manchester. His interest in pure mathematics led him to Trinity College, University of Cambridge, to study with Bertrand Russell. There he turned his attention to philosophy. By 1918 Wittgenstein had completed his Tractatus Logico-philosophicus (1921; translated 1922), a work he then believed provided the “solution” to philosophical problems. Subsequently, he turned away from philosophy and for several years taught elementary school in an Austrian village. In 1929 he returned to Cambridge to resume his work in philosophy and was appointed to the faculty of Trinity College. Soon he began to reject certain conclusions of the Tractatus and to develop the position reflected in his Philosophical Investigations (published posthumously in 1953). Wittgenstein retired in 1947; he died in Cambridge on April 29, 1951. A sensitive, intense man who often sought solitude and was frequently depressed, Wittgenstein abhorred pretense and was noted for his simple style of life and dress. The philosopher was forceful and confident in personality, however, and he exerted considerable influence on those with whom he came in contact.

Wittgenstein’s philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations, however, he maintained that “philosophy is a battle against the bewitchment of our intelligence by means of language.”

Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analysed into less complex propositions until one arrives at simple or elementary propositions. Correspondingly, the world is composed of complex facts that can be analysed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or “states of affairs.” He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts - the propositions of science - are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.

Wittgenstein came to believe, nonetheless, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one looks to see how language is used, the variety of linguistic usage becomes clear. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.

Consciousness, Nietzsche wrote, is the latest development of the organic, and hence also what is most unfinished and least strong. It was in 1882, the year of publication of The Gay Science, that he several times spoke out against antisemitism. Although it was easy for him to overlook Wagner’s antisemitism during the period in which he idealized him, once Wagner gained a wider public and his antisemitism became more intense, Nietzsche forcefully condemned him for it. In The Gay Science Nietzsche wrote that ‘Wagner is Schopenhauerian in his hatred of the Jews, to whom he is not able to do justice even when it comes to their greatest deed; after all, the Jews are the inventors of Christianity’. (It is important to recognize that, while Nietzsche frequently attacked the forces that led to the development of Christianity and what he saw as its destructive impact, there is no simple condemnation here. Nietzsche is genuinely castigating Wagner [and Schopenhauer] and acknowledging this greatest deed of the Jews, however deeply neurotic its consequences may have been - the deed of a creature that nevertheless brought into the world something new and full of promise.) In addition, we may consider, in the words of Bernard Williams, ‘Nietzsche’s ever-present sense that his own consciousness would not be possible without the developments that he disliked’.

The problem of consciousness has long occupied scientists, who have considered its nature without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In an article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.

No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.

French thinker René Descartes applied rigorous scientific methods of deduction to his exploration of philosophical questions. Descartes is probably best known for his pioneering work in philosophical skepticism. Author Tom Sorell examines the concepts behind Descartes’s work Meditationes de Prima Philosophia (1641; Meditations on First Philosophy), focussing on its unconventional use of logic and the reactions it aroused.

Most of the philosophical discussions of consciousness arose from the mind-body issues posed by the French philosopher and mathematician René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.

The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of tendency” (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.

The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focussed on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.

By the 1920s, however, a remarkable revolution had occurred in psychology that was essentially to remove considerations of consciousness from psychological research for some 50 years: Behaviourism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, “I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.” Psychologists then turned almost exclusively to behaviour, as described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.

Beginning in the late 1950s, however, interest in the subject of consciousness returned, specifically in those subjects and techniques relating to altered states of consciousness: sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. Much of the surge in sleep and dream research was directly fuelled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, and at the same time the sleepers' brain waves would show a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they almost always reported dreams, whereas if awakened at other times they did not. This and other research clearly indicated that sleep, once considered a passive state, was instead an active state of consciousness (see Dreaming; Sleep).

During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that were self-directed procedures of physical relaxation and focussed attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was especially relevant to those interested in consciousness and meditation, and a number of “alpha training” programs emerged.

Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from the subject to another person. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state, relative to individual suggestibility and personality traits; the subject has now been largely demythologized, and the limitations of the hypnotic state are fairly well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focussed attention.

Finally, many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce disorders of consciousness. The most prominent of these drugs are lysergic acid diethylamide, or LSD; mescaline (see Peyote); and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs, and their use is highly restricted.

As the concept of a direct, simple linkage between environment and behaviour became unsatisfactory in recent decades, the interest in altered states of consciousness may be taken as a visible sign of renewed interest in the topic of consciousness. That persons are active and intervening participants in their behaviour has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored (see Memory). An entirely new area called cognitive psychology has emerged that centres on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behaviour, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons in terms of their current feelings and thoughts were of obvious importance. The role of consciousness, however, was often de-emphasised in favour of unconscious needs and motivations. Trends can be seen, however, toward a new emphasis on the nature of states of consciousness.

The overwhelming question in neurobiology today is the relation between the mind and the brain. Everyone agrees that what we know as mind is closely related to certain aspects of the behaviour of the brain, not to the heart, as Aristotle thought. Its most mysterious aspect is consciousness or awareness, which can take many forms, from the experience of pain to self-consciousness. In the past the mind (or soul) was often regarded, as it was by Descartes, as something immaterial, separate from the brain but interacting with it in some way. A few neuroscientists, such as Sir John Eccles, still assert that the soul is distinct from the body. Most neuroscientists, however, now believe that all aspects of mind, including its most puzzling attribute - consciousness or awareness - are likely to be explainable in a more materialistic way as the behaviour of large sets of interacting neurons. As William James, the father of American psychology, said a century ago, consciousness is not a thing but a process.

Exactly what that process is has yet to be discovered. For many years after James penned The Principles of Psychology, consciousness was a taboo concept in American psychology because of the dominance of the behaviourist movement. With the advent of cognitive science in the mid-1950s, it became possible again for psychologists to consider mental processes as opposed to merely observing behaviour. In spite of these changes, until recently most cognitive scientists ignored consciousness, as did most neuroscientists. The problem was felt to be either purely ‘philosophical’ or too elusive to study experimentally. Getting a grant just to study consciousness would not have been easy for neuroscientists.

Such timidity is ridiculous, so it is worth thinking about how best to attack the problem scientifically: how might mental events be explained as caused by the firing of large sets of neurons? Although there are those who believe such an approach is hopeless, it is not productive to worry too much over aspects of the problem that cannot be solved scientifically - or, more precisely, that cannot be solved solely by using existing scientific ideas. Radically new concepts may indeed be needed; recall the modifications of scientific thinking forced upon us by quantum mechanics. The only sensible approach seems to be to press the experimental attack until we are confronted with dilemmas that call for new ways of thinking.

There are many possible approaches to the problem of consciousness. Some psychologists feel that any satisfactory theory should try to explain as many aspects of consciousness as possible, including emotion, imagination, dreams, mystical experiences and so on. Although such an all-embracing theory will eventually be necessary, it is wiser to begin with the particular aspect of consciousness that is likely to yield most easily. Which aspect that is may be a matter of personal judgment; the mammalian visual system is a natural choice, because humans are very visual animals and because so much experimental and theoretical work has already been done on it.

Grasping exactly what we need to explain is not easy, and it will take many careful experiments before visual consciousness can be described scientifically. No attempt is made here to define consciousness itself, because of the dangers of premature definition. (If this seems like a cop-out, try defining the word ‘gene’ - you will not find it easy.) Yet the experimental evidence that already exists provides enough of a glimpse of the nature of visual consciousness to guide research.

Visual theorists agree that the problem of visual perception is ill-posed. The mathematical term ‘ill-posed’ means that additional constraints are needed to solve the problem. Although the main function of the visual system is to perceive objects and events in the world around us, the information available to our eyes is not sufficient by itself to provide the brain with its unique interpretation of the visual world. The brain must make essential use of past experience (either its own or that of our distant ancestors, which is embedded in our genes) to help interpret the information coming into our eyes. An example would be the derivation of a three-dimensional representation of the world from the two-dimensional signals falling onto the retinas of our two eyes, or even onto one of them.
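As a loose illustration of why the problem is ill-posed (an aside, not part of the article's argument), many different three-dimensional scenes project to exactly the same two-dimensional image, so the image alone cannot determine the scene. The small Python sketch below, with purely illustrative numbers, makes the point with a pinhole-camera projection.

```python
# Why recovering depth is 'ill-posed': different 3-D points can project to the
# same 2-D image point, so extra constraints (experience) are needed to choose
# an interpretation. Simple pinhole projection; all numbers are illustrative.

def project(x, y, z, focal=1.0):
    """Perspective projection of a 3-D point onto the image plane at z = focal."""
    return (focal * x / z, focal * y / z)

near_point = (0.5, 0.2, 1.0)     # a point one unit away
far_point = (5.0, 2.0, 10.0)     # ten times farther, with ten times the offsets

print(project(*near_point))      # (0.5, 0.2)
print(project(*far_point))       # (0.5, 0.2) - same image point, different scene
```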

Visual theorists also would agree that seeing is a constructive process, one in which the brain has to carry out complex activities (sometimes called computations) to decide which interpretation to adopt of the ambiguous visual input. ‘Computation’ implies that the brain acts to form a symbolic representation of the visual world, with a mapping (in the mathematical sense) of certain aspects of that world onto elements in the brain.

Ray Jackendoff of Brandeis University postulates, as do most cognitive scientists, that the computations carried out by the brain are largely unconscious and that what we become aware of is the result of these computations. However, while the customary view is that this awareness occurs at the highest levels of the computational system, Jackendoff has proposed an intermediate-level theory of consciousness.

What we see, Jackendoff suggests, relates to a representation of surfaces that are directly visible to us, with their outline, orientation, colour, texture and movement. (This idea has similarities to what the late David C. Marr of the Massachusetts Institute of Technology called a 2½-dimensional sketch. It is more than a two-dimensional sketch because it conveys the orientation of the visible surfaces. It is less than three-dimensional because depth information is not explicitly represented.) In the next stage this sketch is processed by the brain to produce a three-dimensional representation. Jackendoff argues that we are not usually aware of this three-dimensional representation.

An example may make this process clearer. If you look at a person whose back is turned to you, you can see the back of the head but not the face. Nevertheless, your brain infers that the person has a face. You can deduce as much because if the person turned around and had no face, you would be very surprised.

The viewer-centred representation corresponds to the visible surface of the head - here, its back - of which you are vividly aware. What your brain infers about the front comes from some kind of three-dimensional representation. This does not mean that information flows only from the surface representation to the three-dimensional one; it almost certainly flows in both directions. When you imagine the front of the face, what you are aware of is a surface representation generated by information from the three-dimensional model.

The distinction between an explicit and an implicit representation is important. An explicit representation is something that is symbolized without further processing. An implicit representation contains the same information but requires further processing to make it explicit. The pattern of coloured pixels on a television screen, for example, contains an implicit representation of objects (say, a person's face), but only the dots and their locations are explicit. When you see a face on the screen, there must be neurons in your brain whose firing, in some sense, symbolizes that face.
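As a loose aside (not part of the article itself), the implicit/explicit distinction can be caricatured in a few lines of Python: a pixel grid holds a face only implicitly, while the label returned by a recognition step is explicit. The grid, the 'face' pattern and the recogniser below are all invented for illustration.

```python
# Toy illustration of implicit vs. explicit representation (hypothetical example).
import numpy as np

# An 8x8 'screen': 1 = bright pixel, 0 = dark. The face is only implicit here;
# nothing in the data structure itself says 'face'.
screen = np.zeros((8, 8), dtype=int)
screen[2, 2] = screen[2, 5] = 1          # eyes
screen[5, 2:6] = 1                       # mouth

def recognise(pixels):
    """Crude 'further processing' that makes the content explicit as a label."""
    eyes = pixels[2, 2] == 1 and pixels[2, 5] == 1
    mouth = pixels[5, 2:6].all()
    return "face" if eyes and mouth else "unknown"

print(recognise(screen))                 # -> face
```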

We call this pattern of firing neurons an active representation. A latent representation of a face must also be stored in the brain, probably as a special pattern of synaptic connections between neurons. For example, you probably have a representation of the SkyDome in your brain, a representation that is usually inactive. If you do think about the SkyDome, the representation becomes active, with the relevant neurons firing away.

An object, incidentally, may be represented in more than one way - as a visual image, as a set of words and their related sounds, or even as a touch or a smell. These different representations are likely to interact with one another. A representation is also likely to be distributed over many neurons, both locally and more globally, and may not be as simple and straightforward as uncritical introspection might indicate. There is suggestive evidence - partly from studying how neurons fire in various parts of a monkey's brain and partly from examining the effects of certain types of brain damage in humans - that different aspects of a face, and of the implications of a face, may be represented in different parts of the brain.

First, there is the representation of a face as a face: two eyes, a nose, a mouth and so on. The neurons involved are usually not too fussy about the exact size or position of this face in the visual field, nor are they very sensitive to small changes in its orientation. In monkeys, there are neurons that respond best when the face is turning in a particular direction, while others are more concerned with the direction in which the eyes are gazing.

Then there are representations of the parts of a face, separate from those for the face as a whole. What is more, the implications of seeing a face - such as the person's sex, the facial expression, the familiarity or unfamiliarity of the face and, in particular, whose face it is - may each be correlated with neurons firing in other places.

What we are aware of at any moment, in one sense or another, is not a simple matter. There may be a very transient form of fleeting awareness that represents only simple features and does not require an attentional mechanism. From this brief awareness the brain constructs a viewer-centred representation - what we see vividly and clearly - that does require attention. This in turn probably leads to three-dimensional object representations and thence to more cognitive ones.

Representations corresponding to vivid consciousness are likely to have special properties. William James thought that consciousness involves both attention and short-term memory. Most psychologists today would agree with this view. Jackendoff writes that consciousness is ‘enriched’ by attention, implying that whereas attention may not be essential for certain limited types of consciousness, it is necessary for full consciousness. Yet it is not clear exactly which forms of memory are involved. Is long-term memory needed? Some forms of acquired knowledge are so embedded in the machinery of neural processing that they are almost certainly used in becoming aware of something. On the other hand, there is evidence from studies of brain-damaged patients that the ability to lay down new long-term episodic memories is not essential for consciousness to be experienced.

It is difficult to imagine that anyone could be conscious if he or she had no memory whatsoever of what had just happened, even an extremely short memory. Visual psychologists talk of iconic memory, which lasts for a fraction of a second, and working memory (such as that used to remember a new telephone number), which lasts for only a few seconds unless it is rehearsed. It is not clear whether both are essential for consciousness. In any case, the division of short-term memory into these two categories may be too crude.

If these complex processes of visual awareness are localized in parts of the brain, which processes are likely to be where? Many regions of the brain may be involved, but it is almost certain that the cerebral neocortex plays a dominant role. Visual information from the retina reaches the neocortex mainly by way of a part of the thalamus (the lateral geniculate nucleus); another significant visual pathway runs from the retina to the superior colliculus, at the top of the brain stem.

The cortex in humans consists of two intricately folded sheets of nerve tissue, one on each side of the head. These sheets are connected by a large tract of about half a billion axons called the corpus callosum. It is well known that if the corpus callosum is cut, as is done for certain cases of intractable epilepsy, one side of the brain is not aware of what the other side is seeing. In particular, the left side of the brain (in a right-handed person) appears not to be aware of visual information received exclusively by the right side. This shows that none of the information required for visual awareness can reach the other side of the brain by travelling down to the brain stem and, from there, back up. In a normal person, such information can get to the other side only by using the axons in the corpus callosum.

A different part of the brain - the hippocampal system - is involved in one-shot, or episodic, memories that, over weeks and months, it passes on to the neocortex. This system is so placed that it receives inputs from, and projects to, many parts of the brain. Thus, one might suspect that the hippocampal system is the essential seat of consciousness. This is not true: Evidence from studies of patients with damaged brains shows that this system is not essential for visual awareness, although naturally a patient lacking one is severely disabled in everyday life because he cannot remember anything that took place more than a minute or so in the past.

In broad terms, the neocortex of alert animals probably acts in two ways. First, building on the crude and somewhat redundant wiring produced by our genes and by embryonic processes, the neocortex draws on visual and other experience to refine itself and to create new categories (or "features") that it can respond to. A new category is not fully created in the neocortex after exposure to only one example of it, although some small modifications of the neural connections may be made.

The second function of the neocortex (at least of the visual part of it) is to respond extremely rapidly to incoming signals. To do so, it uses the categories it has learned and tries to find the combinations of active neurons that, because of its experience, are most likely to represent the relevant objects and events in the visual world at that moment. The formation of such coalitions of active neurons may also be influenced by biases coming from other parts of the brain: For example, signals telling it what best to attend to or high-level expectations about the nature of the stimulus.

Consciousness, as James noted, is always changing. These rapidly formed coalitions occur at different levels and interact to form even broader coalitions. They are transient, lasting usually for only a fraction of a second. Because coalitions in the visual system are the basis of what we see, evolution has seen to it that they form as fast as possible; otherwise, no animal could survive. The brain is impeded in forming neuronal coalitions rapidly because, by computer standards, neurons act very slowly. The brain compensates for this relative slowness partly by using very many neurons, simultaneously and in parallel, and partly by arranging the system in a roughly hierarchical manner.

If visual awareness at any moment corresponds to sets of neurons firing, then the obvious question is: where are these neurons located in the brain, and in what way are they firing? Visual awareness is highly unlikely to occupy all the neurons in the neocortex that are firing above their background rate at a particular moment. One would expect that, theoretically, at least some of these neurons would be involved in doing computations - trying to arrive at the best coalitions - whereas others would express the results of these computations; in other words, what we see.

Fortunately, some experimental evidence can be found to back up this theoretical conclusion. A phenomenon called binocular rivalry may help identify the neurons whose firing symbolizes awareness. This phenomenon can be seen in dramatic form in an exhibit prepared by Sally Duensing and Bob Miller at the Exploratorium in San Francisco.

Binocular rivalry occurs when each eye has a different visual input relating to the same part of the visual field. The early visual system on the left side of the brain receives an input from both eyes but sees only the part of the visual field to the right of the fixation point. The converse is true for the right side. If these two conflicting inputs are rivalrous, one sees not the two inputs superimposed but first one and then the other, in alternation.

In the exhibit, called "The Cheshire Cat," viewers put their heads in a fixed place and are told to keep the gaze fixed. By means of a suitably placed mirror, one of the eyes can look at another person's face, directly in front, while the other eye sees a blank white screen to the side. If the viewer waves a hand in front of this plain screen at the location in his or her visual field occupied by the face, the face is wiped out. The movement of the hand, being visually very salient, has captured the brain's attention. Without attention the face cannot be seen. If the viewer moves the eyes, the face reappears.

In some cases, only part of the face disappears. Sometimes, for example, one eye, or both eyes, will remain. If the viewer looks at the smile on the person's face, the face may disappear, leaving only the smile. For this reason, the effect has been called the Cheshire Cat effect, after the cat in Lewis Carroll's Alice's Adventures in Wonderland.

Although recording activity in individual neurons in a human brain is very difficult, such studies can be done in monkeys. A simple example of binocular rivalry has been studied in a monkey by Nikos K. Logothetis and Jeffrey D. Schall, both then at M.I.T. They trained a macaque to keep its eyes still and to signal whether it was seeing upward or downward movement of a horizontal grating. To produce rivalry, upward movement is projected into one of the monkey's eyes and downward movement into the other, so that the two images overlap in the visual field. The monkey signals that it sees up and down movements alternately, just as humans would. Even though the motion stimulus coming into the monkey's eyes is always the same, the monkey's percept changes every second or so.

Cortical area MT (which some researchers prefer to label V5) is an area mainly concerned with movement. What do the neurons in MT do when the monkey's percept is sometimes up and sometimes down? (The researchers studied only the monkey's first response.) The simplified answer - the actual data are more complicated - is that whereas the firing of some of the neurons correlates with the changes in the percept, for others the average firing rate is unchanged, independent of which direction of movement the monkey is seeing at that moment. Thus, it is unlikely that the firing of all the neurons in the visual neocortex at one particular moment corresponds to the monkey's visual awareness. Exactly which neurons do correspond to awareness remains to be discovered.
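To make the phrase 'correlates with the percept' concrete, here is a small, entirely invented numerical sketch (made-up trial data, not Logothetis and Schall's): a neuron whose trial-by-trial firing rate tracks the reported percept shows a strong correlation, while an 'indifferent' neuron does not.

```python
# Invented data: does a neuron's firing rate track the reported percept?
import numpy as np

# Reported percept on 10 rivalry trials: +1 = 'up', -1 = 'down' (hypothetical).
percept = np.array([+1, -1, +1, +1, -1, -1, +1, -1, +1, -1])

# Firing rates (spikes/s) of two made-up neurons on the same trials.
neuron_a = np.array([42, 18, 39, 45, 20, 17, 41, 22, 38, 19])  # tracks the percept
neuron_b = np.array([30, 29, 31, 28, 32, 30, 29, 31, 30, 28])  # roughly constant

for name, rates in [("A", neuron_a), ("B", neuron_b)]:
    r = np.corrcoef(percept, rates)[0, 1]
    print(f"neuron {name}: correlation with percept = {r:.2f}")
# Neuron A correlates strongly with the percept; neuron B hardly at all,
# as the text describes for many MT neurons during rivalry.
```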

We can postulate that when we clearly see something, there must be neurons actively firing that stand for what we see. This might be called the activity principle. Here, too, there is some experimental evidence. One example is the firing of neurons in a specific cortical visual area in response to illusory contours. Another, perhaps more striking, case is the filling in of the blind spot. The blind spot in each eye is caused by the lack of photoreceptors in the area of the retina where the optic nerve leaves the retina and projects to the brain. Its location is about 15 degrees from the fovea (the visual centre of the eye). Yet if you close one eye, you do not see a hole in your visual field.

Philosopher Daniel C. Dennett of Tufts University is unusual among philosophers in that he is interested both in psychology and in the brain. This interest is much to be welcomed. In a recent book, Consciousness Explained, he has argued that talking about filling in is wrong. He concludes, correctly, that "an absence of information is not the same as information about an absence." From this general principle he argues that the brain does not fill in the blind spot but ignores it.

Dennett's argument by itself, however, does not establish that filling in does not occur; it only suggests that it might not. Dennett also states that "your brain has no machinery for [filling in] at this location." This statement is incorrect. The primary visual cortex lacks a direct input from one eye, but normal "machinery" is there to deal with the input from the other eye. Ricardo Gattass and his colleagues at the Federal University of Rio de Janeiro have shown that in the macaque some of the neurons in the blind-spot area of the primary visual cortex do respond to input from both eyes, probably assisted by inputs from other parts of the cortex. Moreover, in the case of simple filling in, some of the neurons in that region respond as if they were actively filling in.

Thus, Dennett's claim about blind spots is incorrect. In addition, psychological experiments by Vilayanur S. Ramachandran have shown that what is filled in can be quite complex, depending on the overall context of the visual scene. How, he argues, can your brain be ignoring something that is in fact commanding attention?

Filling in, therefore, is not to be dismissed as nonexistent or unusual. It probably represents a basic interpolation process that can occur at many levels in the neocortex. It is, incidentally, a good example of what is meant by a constructive process.

How can we discover the neurons whose firing symbolizes a particular percept? William T. Newsome and his colleagues at Stanford University have done a series of brilliant experiments on neurons in cortical area MT of the macaque's brain. By studying a neuron in area MT, we may discover that it responds best to very specific visual features having to do with motion. A neuron, for instance, might fire strongly in response to the movement of a bar in a particular place in the visual field, but only when the bar is oriented at a certain angle, moving in one of the two directions perpendicular to its length within a certain range of speed.

Exciting just a single neuron is technically difficult, but it is known that neurons that respond to roughly the same position, orientation and direction of movement of a bar tend to be located near one another in the cortical sheet. The experimenters taught the monkey a simple task in movement discrimination using a mixture of dots, some moving randomly, the rest all in one direction. They showed that electrical stimulation of a small region in the right place in cortical area MT would bias the monkey's motion discrimination, almost always in the expected direction.

Thus, the stimulation of these neurons can influence the monkey's behaviour and probably its visual percept. Such experiments do not, however, show decisively that the firing of such neurons is the exact neural correlate of the percept. The correlate could be only a subset of the neurons being activated. Or perhaps the real correlate is the firing of neurons in another part of the visual hierarchy that is strongly influenced by the neurons activated in area MT.

These same reservations apply also to cases of binocular rivalry. Clearly, the problem of finding the neurons whose firing symbolizes a particular percept is not going to be easy. It will take many careful experiments to track them down even for one kind of percept.

The purpose of vivid visual awareness is obviously to feed into the cortical areas concerned with the implications of what we see; from there the information shuttles on the one hand to the hippocampal system, to be encoded (temporarily) into long-term episodic memory, and on the other to the planning levels of the motor system. Nevertheless, is it possible to go from a visual input to a behavioural output without any relevant visual awareness?

That such a process can happen is demonstrated by the remarkable class of patients with ‘blind-sight’. These patients, all of whom have suffered damage to their visual cortex, can point with fair accuracy at visual targets or track them with their eyes while vigorously denying seeing anything. In fact, these patients are as surprised as their doctors by their abilities. The amount of information that ‘gets through’, however, is limited: Blind-sight patients have some ability to respond to wavelength, orientation and motion, yet they cannot distinguish a triangle from a square.

It is naturally of great interest to know which neural pathways are being used in these patients. Investigators originally suspected that the pathway ran through the superior colliculus. Recent experiments suggest that a direct but weak connection may be involved between the lateral geniculate nucleus and other visual areas in the cortex. It is unclear whether an intact primary visual cortex is essential for immediate visual awareness. Conceivably the visual signal in blind-sight is so weak that the resulting neural activity cannot produce awareness, although it remains strong enough to get through to the motor system.

Normal-seeing people regularly respond to visual signals without being fully aware of them. In automatic actions, such as swimming or driving a car, complex but stereotypical actions occur with little, if any, associated visual awareness. In other cases, the information conveyed is either very limited or very attenuated. Thus, while we can function without visual awareness, our behaviour without it is restricted.

Clearly, it takes a certain amount of time to experience a conscious percept. It is difficult to determine just how much time is needed for an episode of visual awareness, but one aspect of the problem that can be demonstrated experimentally is that signals received close together in time are treated by the brain as simultaneous.

A disk of red light is flashed for, say, 20 milliseconds, followed immediately by a 20-millisecond flash of green light in the same place. The subject reports that he did not see a red light followed by a green light. Instead he saw a yellow light, just as he would have if the red and the green light had been flashed simultaneously. Yet the subject could not have experienced yellow until after the information from the green flash had been processed and integrated with the preceding red one.

Experiments of this type led psychologist Robert Efron, now at the University of California at Davis, to conclude that the processing period for perception is about 60 to 70 milliseconds. Similar periods are found in experiments with tones in the auditory system. It is always possible, however, that the processing times may be different in higher parts of the visual hierarchy and in other parts of the brain. Processing is also more rapid in trained, compared with naive, observers.

Because it appears to be involved in some forms of visual awareness, it would help if we could discover the neural basis of attention. Eye movement is a form of attention, since the area of the visual field in which we see with high resolution is remarkably small, roughly the area of the thumbnail at arm's length. Thus, we move our eyes to gaze directly at an object in order to see it more clearly. Our eyes usually move three or four times a second. Psychologists have shown, however, that there appears to be a faster form of attention that moves around, in some sense, when our eyes are stationary.

The exact psychological nature of this faster attentional mechanism is still unclear. Several neuroscientists, however, including Robert Desimone and his colleagues at the National Institute of Mental Health, have shown that the rate of firing of certain neurons in the macaque's visual system depends on what the monkey is attending to in the visual field. Thus, attention is not solely a psychological concept; it also has neural correlates that can be observed. A number of researchers have found that the pulvinar, a region of the thalamus, appears to be involved in visual attention. One might like to believe that the thalamus deserves to be called ‘the organ of attention’, but this status has yet to be established.

The major problem is to find what activity in the brain corresponds directly to visual awareness. It has been speculated that each cortical area produces awareness of only those visual features that are ‘columnar’, or arranged in stacks, or columns, of neurons perpendicular to the cortical surface. Thus, the primary visual cortex could code for orientation and area MT for motion. So far experimentalists have not found one particular region in the brain where all the information needed for visual awareness appears to come together. Dennett has dubbed such a hypothetical place ‘The Cartesian Theatre’. He argues on theoretical grounds that it does not exist.

Awareness seems to be distributed not just on a local scale but more widely over the neocortex. Vivid visual awareness is unlikely to be distributed over every cortical area, because some areas show no response to visual signals. Awareness might, for example, be associated only with those areas that connect back directly to the primary visual cortex, or alternatively with those areas that project into one another's layer four. (The latter areas are always at the same level in the visual hierarchy.)

The key issue, then, is how the brain forms its global representations from visual signals. If attention is crucial for visual awareness, the brain could form representations by attending to just one object at a time, rapidly moving from one object to the next. For example, the neurons representing all the different aspects of the attended object could all fire together very rapidly for a short period, possibly in rapid bursts.

This fast, simultaneous firing might not only excite those neurons that symbolized the implications of that object but also temporarily strengthen the relevant synapses so that this particular pattern of firing could be quickly recalled in the form of short-term memory. If only one representation needs to be held in short-term memory, as in remembering a single task, the neurons involved may continue to fire for a period.

A problem arises if it is necessary to be aware of more than one object at exactly the same time. If all the attributes of two or more objects were represented by neurons firing rapidly, their attributes might be confused. The colour of one might become attached to the shape of another. This sometimes happens in very brief presentations.

Some time ago Christoph von der Malsburg, now at the Ruhr-Universität Bochum, suggested that this difficulty would be circumvented if the neurons associated with any one object all fired in synchrony (that is, if their times of firing were correlated) but out of synchrony with those representing other objects. Recently two groups in Germany reported that there does appear to be correlated firing between neurons in the visual cortex of the cat, often in a rhythmic manner, with a frequency in the 35- to 75-hertz range, sometimes called 40-hertz, or gamma, oscillation.

Von der Malsburg's proposal prompted the suggestion that this rhythmic and synchronized firing might be the neural correlate of awareness and that it might serve to bind together activity concerning the same object in different cortical areas. The matter is still undecided, and at present the fragmentary experimental evidence does little to support such an idea. Another possibility is that the 40-hertz oscillations may help distinguish figure from ground or assist the mechanism of attention.
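As an aside, the idea of 'binding by synchrony' can be caricatured numerically: signals locked to the same rhythm and phase are strongly correlated, while a signal at the same frequency but a different phase is not. The sketch below uses invented rates and noise levels purely for illustration, not any actual recordings.

```python
# Toy version of binding by synchrony: correlated vs. uncorrelated 40 Hz signals.
import numpy as np

t = np.arange(0.0, 1.0, 0.001)            # one second in 1-ms steps
rng = np.random.default_rng(0)

def firing_rate(freq_hz, phase=0.0, noise=5.0):
    """Instantaneous firing rate (spikes/s) locked to a rhythm, plus noise."""
    return 50 + 40 * np.sin(2 * np.pi * freq_hz * t + phase) + rng.normal(0, noise, t.size)

same_object_1 = firing_rate(40.0)                   # neurons bound to one object...
same_object_2 = firing_rate(40.0)                   # ...fire in step with each other
other_object = firing_rate(40.0, phase=np.pi / 2)   # same rhythm, out of step

print(np.corrcoef(same_object_1, same_object_2)[0, 1])   # close to +1
print(np.corrcoef(same_object_1, other_object)[0, 1])    # close to 0
```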

Are there some particular types of neurons, distributed over the visual neocortex, whose firing directly symbolizes the content of visual awareness? One very simplistic hypothesis is that the activities in the upper layers of the cortex are largely unconscious ones, whereas the activities in the lower layers (layers five and six) mostly correlate with consciousness. We have wondered whether the pyramidal neurons in layer five of the neocortex, especially the larger ones, might play this latter role.

These are the only cortical neurons that project right out of the cortical system (that is, not to the neocortex, the thalamus or the claustrum). If visual awareness represents the results of neural computations in the cortex, one might expect that what the cortex sends elsewhere would symbolize those results. What is more, the neurons in layer five show an unusual propensity to fire in bursts. The idea that layer five neurons may directly symbolize visual awareness is attractive, but it still is too early to tell whether there is anything in it.

Visual awareness is clearly a difficult problem. More work is needed on the psychological and neural basis of both attention and very short-term memory. Studying the neurons when a percept changes, even though the visual input is constant, should be a powerful experimental paradigm. We need to construct neurobiological theories of visual awareness and to test them using a combination of molecular, neurobiological and clinical imaging studies.

It is strongly believed that once we have mastered the secret of this simple form of awareness, we may be close to understanding a central mystery of human life: How the physical events occurring in our brains while we think and act in the world relate to our subjective sensations - that is, how the brain relates to the mind.

As an afterthought, it now seems likely that there are rapid ‘on-line’ systems for stereotyped motor responses such as hand or eye movements. These systems are unconscious and lack memory. Conscious seeing, on the other hand, seems to be slower and more subject to visual illusions. The brain needs to form a conscious representation of the visual scene that it can then use for many different actions or thoughts. Precisely how all these pathways work, and how they interact, is far from clear.

Still, it is probably too early to draw firm conclusions from them about the exact neural correlates of visual consciousness. It has been suggested, on theoretical grounds based on the neuroanatomy of the macaque monkey, that primates are not directly aware of what is happening in the primary visual cortex, even though most of the visual information flows through it. This hypothesis is supported by some experimental evidence, but it remains controversial.

Let us consider once again a simple example: if you mentally rotate the letter ‘N’ 90 degrees to the right, is a new letter formed? In seeking an answer, scientists say, most people conjure up an image in their mind’s eye, mentally ‘look’ at it, add details one at a time and describe what they see. They seem to have a definite picture in their heads, but where in the brain are these images formed? How are they generated? How do people ‘move things around’ in their imaginations?

Using clues from brain-damaged patients and advanced brain imaging techniques, neuroscientists have now found that the brain uses virtually identical pathways for seeing objects and for imagining them, only it uses these pathways in reverse.

In the process of human vision, a stimulus in the outside world is passed from the retina to the primary visual cortex and then to higher centres until an object or event is recognized. In mental imaging, a stimulus originates in higher centres and is passed down to the primary visual cortex, where it is recognized.

The implications are beguiling. Scientists say that for the first time they are glimpsing the biological basis for abilities that make some people better at math or art or flying fighter aircraft. They can now explain why imagining oneself shooting baskets like Michael Jordan can improve one’s athletic performance. In a finding that raises troubling questions about the validity of eyewitness testimony, they can show that an imagined object is, to the observer’s brain at least, every bit as real as one that is seen.

People have always wondered whether there are pictures in the brain. More recently, the debate has centred on a specific query: as a form of thought, is mental imagery rooted in the abstract symbols of language or in the biology of the visual system?

The biology arguments are winning converts every day. The new findings are based on the notion that mental capacities like memory, perception, mental imagery, language and thought are rooted in complex underlying structures in the brain. Thus an image held in the mind’s eye has physical rather than ethereal properties. Mental imagery research has developed in parallel with research on the human visual system, and each provides clues to the other, helping to force out the details of a highly complex system.

Vision is not a single process but the linking of subsystems that process specific aspects of vision. To understand how this works, consider looking at an apple on a picnic table ten feet away. Light reflects off the apple, hits the retina and is sent through nerve fibres to an early visual way station that can be called the visual buffer. Here the apple’s image is literally mapped onto the surface of brain tissue as it appears in space, with high resolution. You can think of the visual buffer as a screen: a picture can be displayed on the screen from the camera, which is your eyes, or from a videotape recorder, which is your memory.

In this case, the image of the apple is held on the screen while its various features are examined separately. At this point the brain does not yet know that it is seeing an apple. Next, distinct features of the apple are sent to two higher subsystems for further analysis, often referred to as the ‘what’ system and the ‘where’ system. The brain needs to match the primitive apple pattern with memories and knowledge about apples; it seeks that knowledge in visual memories that are held, like videotapes, in the brain.

The ‘what’ system, in the temporal lobe, contains cells that are tuned for specific shapes and colours of objects. Some respond to red, round objects in an infinite variety of positions, ignoring location in space. Thus the apple could be on a distant tree, on the picnic table or in front of your nose; it would still stimulate cells tuned for red, round objects, which might be apples, beach balls or tomatoes.

The ‘where’ system, in the parietal lobe, contains cells that are tuned to fire when objects are in different locations. If the apple is far away, one set of cells is activated, while another set fires if the apple is close up. Thus the brain has a way of knowing where objects are in space so the body can navigate accordingly.

When cells in the ‘what’ and ‘where’ systems are stimulated, they combine their signals in a yet higher subsystem where associative memories are stored. This system is like a card file in which visual memories, as if held on videotapes, can be looked up and activated. If the signals from the ‘what’ and ‘where’ systems find a good match in associative memory, you recognize the object as an apple. You also know what it tastes and smells like, that it has seeds, that it can be made into your favourite pie and everything else stored in your brain about apples.

Sometimes, however, recognition does not occur at the level of associative memory. Because it is far away, the red object on the picnic table could be a tomato or an apple. You are not sure of its identity, and so another level of analysis kicks in.

This highest level, in the frontal lobe, is where decisions are made; to use the same analogy, it is like a catalogue for the videotapes in the brain. You look up features of the image to help you identify it. A tomato has a pointed leaf, while an apple has a slender stem. When the apple’s stem is found at this higher level, the brain decides that it has an apple in its visual field.

Signals are then fired back down through the system to the visual buffer, and the apple is recognized. Significantly, every visual area that sends information upstream through nerve fibres also receives information back from the area above it. Information flows richly in both directions at all times.
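The stages just described can be caricatured as a simple data flow: buffer, ‘what’ and ‘where’ subsystems, associative memory, and a frontal ‘catalogue’ that settles ambiguous cases. The Python sketch below uses entirely hypothetical names, rules and data; it is meant only to show the shape of the pipeline, not to model the brain.

```python
# Caricature of the described pipeline (hypothetical names and rules).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Features:
    colour: str
    shape: str
    distance_ft: float
    detail: str                      # e.g. 'slender stem' or 'pointed leaf'

def what_system(f: Features) -> str:
    """Shape and colour analysis, ignoring location (temporal lobe)."""
    return "red round object" if f.colour == "red" and f.shape == "round" else "unknown"

def where_system(f: Features) -> str:
    """Location analysis (parietal lobe); guides action, not naming."""
    return "far" if f.distance_ft > 6 else "near"

def associative_memory(category: str) -> Optional[str]:
    """Look up stored knowledge; None means no unique match (apple or tomato?)."""
    return None if category == "red round object" else category

def frontal_decision(f: Features) -> str:
    """Highest level: consult one distinguishing detail from the 'catalogue'."""
    return "apple" if f.detail == "slender stem" else "tomato"

def recognise(f: Features) -> str:
    category = what_system(f)
    _location = where_system(f)       # computed in parallel; not used for naming
    match = associative_memory(category)
    return match if match is not None else frontal_decision(f)

print(recognise(Features("red", "round", 10.0, "slender stem")))   # -> apple
```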

Mental imagery is the result of this duality. Instead of a visual stimulus, a mental stimulus activates the system. The stimulus can be anything - a memory, an odour, a face, a reverie, a song or a question. Images are based on previously encoded representations of shape: to imagine a cat, for example, you look up the videotape in associative memory for ‘cat’.

When that subsystem is activated, a general image of a cat is mapped onto the screen - the visual buffer - in the primary visual cortex. It is a stripped-down version of a cat, and everyone’s version is different. Suppose the preliminary mapping raises a question of detail: does the cat have curved claws? To find out, the mind’s eye shifts attention and goes back to the higher subsystems where detailed features are stored. You activate the ‘curved claws’ tape, zoom back down to the front paws and add the claws to the cat. Thus each image is built up, a part at a time.

The more complex the image, the more time it takes to conjure it up in the visual buffer. On the basis of brain scans using the technique known as positron emission tomography, researchers estimate that adding each new part requires roughly 75 to 100 thousandths of a second.
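Taken at face value, the quoted figure gives a back-of-the-envelope estimate of how long a multi-part image takes to assemble; the part count below is an arbitrary, illustrative choice.

```python
# Back-of-the-envelope use of the quoted 75-100 ms per added part (illustrative).
ms_low, ms_high = 75, 100
parts = 8                                   # e.g. head, body, four legs, tail, claws

low_s = parts * ms_low / 1000
high_s = parts * ms_high / 1000
print(f"{parts} parts: roughly {low_s:.1f}-{high_s:.1f} seconds to build the image")
# -> 8 parts: roughly 0.6-0.8 seconds to build the image
```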

The visual system maps imagined objects and scenes precisely, mimicking the real world: you can scan an image and study it as if the object were actually there.

How can this be shown? One demonstration is to ask people to imagine objects at different sizes: “Imagine a tiny honeybee. What colour is its head?” To answer, people take time to zoom in on the bee’s head. Conversely, objects can be imagined so large that they overflow the visual field: “Imagine walking toward a car. It looms larger as you get closer to it. There comes a point where you cannot see the whole car at once; it seems to overflow the screen in your mind’s eye.”

People with brain damage often demonstrate that the visual systems are doing double duty. For example, stroke patients who lose the ability to see colours also cannot imagine colours.

An epilepsy patient experienced a striking change in her ability to imagine objects after her right occipital lobe was removed to reduce seizures. Before surgery, the woman estimated that she could stand, in her mind’s eye, about 14 feet from a horse before it overflowed her visual field. After surgery, the imagined horse overflowed at about 34 feet. Her field of mental imagery had been roughly halved.

Another patient suffered damage affecting his ‘what’ system while his ‘where’ system remained intact. If you ask him to imagine what colour the inside of a watermelon is, he does not know, and if you press him, he might guess blue. However, if you ask him whether Toronto is closer to London than Winnipeg is, he answers correctly and instantly.

Imaging studies of healthy brains produce similar findings. When a person is asked to look at and then to imagine an object, the same brain areas are activated. When people add detail to images, they use the same circuits used in vision. Interestingly, people who describe themselves as vivid imagers can show stronger activation of the relevant areas in the brain.

People use imagery in their everyday lives to call up information from memory, to reason and to learn new skills, the scientists say. It can also lead to creativity. Albert Einstein apparently got his first insight into relativity when he imagined chasing after and matching the speed of a beam of light.

It can improve athletic skills. When you see a gifted athlete move in a particular way, you note how he or she moves, and you can use that information to improve your own technique. The brain uses the same representations in the ‘where’ system to help direct both actual movements and imagined movements. Thus, refining these representations in imagery will transfer to actual performance, provided the motions are also physically practised.

Humans exhibit vast individual differences in the various components of mental imagery, which may help explain certain talents and predilections. Fighter pilots, for example, can imagine the rotation of complex objects in a flash, whereas most people need considerably more time for such tasks.

Currently in progress are studies examining the brains of mathematicians and artists with a new imaging machine that reveals individual differences in the way brains are biologically wired. The researchers are looking to see whether people who are good at geometry have different circuitry from people who are good at algebra.

A philosophical conundrum arises from the new research: people can confuse what is real with what is imagined, raising questions about eyewitness testimony and about memory itself.

Meanwhile, expectation plays a powerful role in visual perception: you often need only part of the picture in order to see an object. Expectations allow you to see an apple when only fragments of it are visible, because those fragments can drive the system into producing the image of an apple in your visual buffer; in effect, you prime yourself so strongly that you play the apple tape from your memory banks. Thus people can be fooled by their mind’s eye. Imagine a man standing before a frightened store clerk; you quickly assume a robbery is under way. It is dark and he is in shadow. Because you expect to see a gun, your thresholds are lowered and you may run the tape for a gun, even though there is none. As far as your brain is concerned, it saw a gun, yet the gun may not have been real.

Luckily, inputs from the eye tend to be much stronger than inputs from imagination, but on a dark night, under certain circumstances, it is easy to be fooled by one’s own brain.

It is amazing that imagination and reality are not confused more often. Images are fuzzier and less coherent than real memories, and humans are able to differentiate them partly by how plausible they seem.

Our new epistemological situation suggests that questions regarding the character of the whole no longer lie within the domain of science as classically conceived, which rested on the assumption that knowledge of all the constituent parts of a mechanistic universe is equal to knowledge of the whole. That paradigm sanctioned the Cartesian division between mind and world which became a pervasive preoccupation in western philosophy, art, and literature beginning in the seventeenth century. This explains in no small part why many humanists and social scientists feel that science concerns itself only with the mechanisms of physical reality and is therefore indifferent or hostile to the experience of human subjectivity - the world in which a human being, with all his or her myriad sensations, feelings, thoughts, values and beliefs, lives a life and comes to an end.

Nevertheless, man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the cosmos, in terms that reflect ‘reality’. By using the processes of nature as metaphor to describe the forces that operate upon and within man, we come as close to describing ‘reality’ as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which naturally differs for different ages and cultures and develops and changes over the course of time. For this reason it will always be necessary to use metaphor and myth to provide ‘comprehensible’ guides to living. In this way, man’s imagination and intellect play vital roles in his survival and evolution.

Notwithstanding, different ethics underlie such different conceptions of human life as those of the classical Greeks, of Christianity and of Judaism (the Hebrews being the inhabitants of the ancient kingdom of Judah, whose lunisolar calendar dates the creation of the world at 3761 BC).

Europe owes the Jews no small thanks for making people think more logically and for establishing cleaner intellectual habits - nobody more so than the Germans, who are a lamentably déraisonnable [unreasonable] race . . . Wherever Jews have won influence they have taught men to make finer distinctions, more rigorous inferences, and to write in a more luminous and cleanly fashion; their task was ever to bring a people ‘to listen to raison’.

His position is very radical. Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also refuses the pragmatist identification of knowledge and truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.

Nietzsche’s view, his ‘perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation, to which interpretations would have to correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: facts are precisely what there is not, only interpretations.

Perspectivism does not deny that particular views can be true: like some versions of contemporary anti-realism, it attributes to a specific approach truths relative to the facts as that approach itself determines them. Still, it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus Nietzsche grants the truth of specific scientific theories; he does, however, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’: neither the facts science addresses nor the methods it employs are privileged. Scientific theories serve the purposes for which they have been devised, but these have no priority over the many other purposes of human life.

For those curious about the relation between mind and unconscious drive, both Nietzsche and Freud offer theories that can seem uncanny.

In the late 19th century Viennese neurologist Sigmund Freud developed a theory of personality and a system of psychotherapy known as psychoanalysis. According to this theory, people are strongly influenced by unconscious forces, including innate sexual and aggressive drives. Freud recounted the early resistance to his ideas and their later acceptance. From the outset of psychoanalysis, Freud attracted followers, many of whom later proposed competing theories. As a group, these neo-Freudians shared the assumption that the unconscious plays an important role in a person’s thoughts and behaviours. Most parted company with Freud, however, over his emphasis on sex as a driving force. For example, Swiss psychiatrist Carl Jung theorized that all humans inherit a collective unconscious that contains universal symbols and memories from their ancestral past. Austrian physician Alfred Adler theorized that people are primarily motivated to overcome inherent feelings of inferiority. He wrote about the effects of birth order in the family and coined the term sibling rivalry. Karen Horney, a German-born American psychiatrist, argued that humans have a basic need for love and security, and become anxious when they feel isolated and alone.

Motivated by a desire to uncover unconscious aspects of the psyche, psychoanalytic researchers devised what are known as projective tests. A projective test asks people to respond to an ambiguous stimulus such as a word, an incomplete sentence, an inkblot, or an ambiguous picture. These tests are based on the assumption that if a stimulus is vague enough to accommodate different interpretations, then people will use it to project their unconscious needs, wishes, fears, and conflicts. The most popular of these tests are the Rorschach Inkblot Test, which consists of ten inkblots, and the Thematic Apperception Test, which consists of drawings of people in ambiguous situations.

Psychoanalysis has been criticized on various grounds and is not as popular as in the past. However, Freud’s overall influence on the field has been deep and lasting, particularly his ideas about the unconscious. Today, most psychologists agree that people can be profoundly influenced by unconscious forces, and that people often have a limited awareness of why they think, feel, and behave as they do.

The techniques of psychoanalysis and much of the psychoanalytic theory based on its application were developed by Sigmund Freud. His work concerning the structure and the functioning of the human mind had far-reaching significance, both practically and scientifically, and it continues to influence contemporary thought.

The first of Freud's innovations was his recognition of unconscious psychic processes that follow laws different from those that govern conscious experience. Under the influence of the unconscious, thoughts and feelings that belong together may be shifted or displaced out of context; two disparate ideas or images may be condensed into one; thoughts may be dramatized in images rather than expressed as abstract concepts; and certain objects may be represented symbolically by images of other objects, although the resemblance between the symbol and the original object may be vague or far-fetched. The laws of logic, indispensable for conscious thinking, do not apply to these unconscious mental productions.

Recognition of these modes of operation in unconscious mental processes made possible the understanding of such previously hard-to-grasp psychological phenomena as dreaming. Through analysis of unconscious processes, Freud saw dreams as serving to protect sleep against disturbing impulses arising from within and related to early life experiences. Thus, unacceptable impulses and thoughts, called the latent dream content, are transformed into a conscious, although no longer immediately comprehensible, experience called the manifest dream. Knowledge of these unconscious mechanisms permits the analyst to reverse the so-called dream work, that is, the process by which the latent dream is transformed into the manifest dream, and through dream interpretation, to recognize its underlying meaning.

A basic assumption of Freudian theory is that unconscious conflicts involve instinctual impulses, or drives, that originate in childhood. As these unconscious conflicts are recognized by the patient through analysis, his or her adult mind can find solutions that were unattainable to the immature mind of the child. This depiction of the role of instinctual drives in human life is a unique feature of Freudian theory.

According to Freud's doctrine of infantile sexuality, adult sexuality is a product of a complex process of development, beginning in childhood, involving a variety of body functions or areas (oral, anal, and genital zones), and corresponding to various stages in the relation of the child to adults, especially to parents. Of crucial importance is the so-called Oedipal period, occurring at four to six years of age, because at this stage of development the child for the first time becomes capable of an emotional attachment to the parent of the opposite sex that is similar to the adult's relationship to a mate; the child simultaneously reacts as a rival to the parent of the same sex. Physical immaturity dooms the child's desires to frustration and his or her first step toward adulthood to failure. Intellectual immaturity further complicates the situation because it makes children afraid of their own fantasies. The extent to which the child overcomes these emotional upheavals and to which these attachments, fears, and fantasies continue to live on in the unconscious greatly influences later life, especially love relationships.

The conflicts occurring in the earlier developmental stages are no less significant as a formative influence, because these problems represent the earliest prototypes of such basic human situations as dependency on others and relationship to authority. Also basic in moulding the personality of the individual is the behaviour of the parents toward the child during these stages of development. The fact that the child reacts not only to objective reality but also to fantasy distortions of reality, however, greatly complicates even the best-intentioned educational efforts.

The effort to clarify the bewildering number of interrelated observations uncovered by psychoanalytic exploration led to the development of a model of the structure of the psychic system. Three functional systems are distinguished that are conveniently designated as the id, ego, and superego.

The first system refers to the sexual and aggressive tendencies that arise from the body, as distinguished from the mind. Freud called these tendencies ‘Triebe’, which literally means “drives” but is often inaccurately translated as “instincts” to indicate their innate character. These inherent drives claim immediate satisfaction, which is experienced as pleasurable; the id thus is dominated by the pleasure principle. In his later writings, Freud tended more toward a psychological than a biological conceptualization of the drives.

How the conditions for satisfaction are to be brought about is the task of the second system, the ego, which is the domain of such functions as perception, thinking, and motor control that can accurately assess environmental conditions. To fulfill its function of adaptation, or reality testing, the ego must be capable of enforcing the postponement of satisfaction of the instinctual impulses originating in the id. To defend itself against unacceptable impulses, the ego develops specific psychic means, known as defence mechanisms. These include repression, the exclusion of impulses from conscious awareness; projection, the process of ascribing to others one's own unacknowledged desires; and reaction formation, the establishment of a pattern of behaviour directly opposed to a strong unconscious need. Such defence mechanisms are put into operation whenever anxiety signals a danger that the original unacceptable impulses may reemerge.

An id impulse becomes unacceptable not only as a result of a temporary need to postpone its satisfaction until suitable reality conditions can be found, but more often because of a prohibition imposed on the individual by others, originally the parents. The totality of these demands and prohibitions constitutes the major content of the third system, the superego, the function of which is to control the ego in accordance with the internalized standards of parental figures. If the demands of the superego are not fulfilled, the person may feel shame or guilt. Because the superego, in Freudian theory, originates in the struggle to overcome the Oedipal conflict, it has a power akin to an instinctual drive, is in part unconscious, and can give rise to feelings of guilt not justified by any conscious transgression. The ego, having to mediate among the demands of the id, the superego, and the outside world, may not be strong enough to reconcile these conflicting forces. The more the ego is impeded in its development because of being enmeshed in its earlier conflicts, called fixations or complexes, or the more it reverts to earlier satisfactions and archaic modes of functioning, known as regression, the greater is the likelihood of succumbing to these pressures. Unable to function normally, it can maintain its limited control and integrity only at the price of symptom formation, in which the tensions are expressed in neurotic symptoms.

A cornerstone of modern psychoanalytic theory and practice is the concept of anxiety, which calls into play appropriate mechanisms of defence against certain danger situations. These danger situations, as described by Freud, are the fear of abandonment by or the loss of the loved one (the object), the risk of losing the object's love, the danger of retaliation and punishment, and, finally, the hazard of reproach by the superego. Thus, symptom formation, character and impulse disorders, perversions, and sublimations all represent compromise formations, different forms of an adaptive integration that the ego attempts to achieve by reconciling, more or less successfully, the conflicting forces in the mind.

Various psychoanalytic schools have adopted other names for their doctrines to indicate their deviations from Freudian theory.

Carl Gustav Jung, one of the earliest pupils of Freud, eventually created a school that he preferred to call analytical psychology. Like Freud, Jung used the concept of the libido; however, to him it meant not only sexual drives but a composite of all creative instincts and impulses and the entire motivating force of human conduct. According to his theories, the unconscious is composed of two parts: the personal unconscious, which contains the residue of the individual's own experience, and the collective unconscious, the reservoir of the experience of the human race. In the collective unconscious exist many primordial images, or archetypes, common to all individuals of a given country or historical era. Archetypes take the form of bits of intuitive knowledge or apprehension and normally exist only in the collective unconscious of the individual. When the conscious mind contains no images, however, as in sleep, or when the consciousness is caught off guard, the archetypes commence to function. Archetypes are primitive modes of thought and tend to personify natural processes in terms of such mythological conceptions as good and evil spirits, fairies, and dragons. The mother and the father also serve as prominent archetypes.

An important concept in Jung's theory is the existence of two basically different types of personality, mental attitude, and function. When the libido and the individual's general interest are turned outward toward people and objects of the external world, he or she is said to be extroverted. When the reverse is true, and libido and interest are centred on the individual, he or she is said to be introverted. In a completely normal individual these two tendencies alternate, neither dominating, but usually the libido is directed mainly in one direction or the other; as a result, two personality types are recognizable.

Jung rejected Freud's distinction between the ego and the superego but recognized a part of the personality, somewhat similar to the superego, that he called the persona. The persona consists of what a person appears to be to others, in contrast to what he or she actually is. The persona is the role the individual chooses to play in life, the total impression he or she wishes to make on the outside world.

Alfred Adler, another of Freud's pupils, differed from both Freud and Jung in stressing that the motivating force in human life is the sense of inferiority, which begins when an infant can comprehend the existence of other people who are better able to care for themselves and cope with their environment. From the moment the feeling of inferiority is established, the child strives to overcome it. Because inferiority is intolerable, the compensatory mechanisms set up by the mind may get out of hand, resulting in self-centred neurotic attitudes, overcompensations, and a retreat from the real world and its problems.

Adler laid particular stress on inferiority feelings arising from what he regarded as the three most important relationships: those between the individual and work, friends, and loved ones. The avoidance of inferiority feelings in these relationships leads the individual to adopt a life goal that is often not realistic and is frequently expressed as an unreasoning will to power and dominance, leading to every type of antisocial behaviour from bullying and boasting to political tyranny. Adler believed that analysis can foster a sane and rational “community feeling” that is constructive rather than destructive.

Another student of Freud, Otto Rank, introduced a new theory of neurosis, attributing all neurotic disturbances to the primary trauma of birth. In his later writings he described individual development as a progression from complete dependence on the mother and family, to a physical independence coupled with intellectual dependence on society, and finally to complete intellectual and psychological emancipation. Rank also laid great importance on the will, defined as “a positive guiding organization and integration of the self, which uses creatively, as well as inhibits and controls, the instinctual drives.”

Later noteworthy modifications of psychoanalytic theory include those of the American psychoanalysts Erich Fromm, Karen Horney, and Harry Stack Sullivan. The theories of Fromm lay particular emphasis on the concept that society and the individual are not separate and opposing forces, that the nature of society is determined by its historic background, and that the needs and desires of individuals are largely formed by their society. As a result, Fromm believed, the fundamental problem of psychoanalysis and psychology is not to resolve conflicts between fixed and unchanging instinctive drives in the individual and the fixed demands and laws of society, but to bring about harmony and an understanding of the relationship between the individual and society. Fromm also stressed the importance to the individual of developing the ability to use his or her mental, emotional, and sensory powers fully.

Horney worked primarily in the field of therapy and the nature of neuroses, which she defined as of two types: situation neuroses and character neuroses. Situation neuroses arise from the anxiety attendant on a single conflict, such as being faced with a difficult decision. Although they may paralyse the individual temporarily, making it impossible to think or act efficiently, such neuroses are not deeply rooted. Character neuroses are characterized by a basic anxiety and a basic hostility resulting from a lack of love and affection in childhood.

Sullivan believed that all development can be described exclusively in terms of interpersonal relations. Character types and neurotic symptoms are explained as results of the struggle against anxiety arising from the individual's relations with others, and are, in effect, security systems maintained for the purpose of allaying anxiety.

An important school of thought is based on the teachings of the British psychoanalyst Melanie Klein. Because most of Klein's followers worked with her in England, this has become known as the English school. Its influence, nevertheless, is very strong throughout the European continent and in South America. Its principal theories were derived from observations made in the psychoanalysis of children. Klein posited the existence of complex unconscious fantasies in children under the age of six months. The principal source of anxiety arises from the threat to existence posed by the death instinct. Depending on how concrete representations of the destructive forces are dealt with in the unconscious fantasy life of the child, two basic early mental attitudes result that Klein characterized as a “depressive position” and a “paranoid position.” In the paranoid position, the ego's defence consists of projecting the dangerous internal object onto some external representative, which is treated as a genuine threat emanating from the external world. In the depressive position, the threatening object is introjected and treated in fantasy as concretely retained within the person. Depressive and hypochondriacal symptoms result. Although considerable doubt exists that such complex unconscious fantasies operate in the minds of infants, these observations have been very important to the psychology of unconscious fantasies, paranoid delusions, and theory concerning early object relations.

Freud was born in Freiberg, Moravia (now Příbor, Czech Republic), on May 6, 1856, and educated at Vienna University. When he was three years old his family, fleeing the anti-Semitic riots then raging in Freiberg, moved to Leipzig. Shortly afterwards the family settled in Vienna, where Freud remained for most of his life.

Although Freud’s ambition from childhood had been a career in law, he decided to become a medical student shortly before he entered Vienna University in 1873. Inspired by the scientific investigations of the German poet Goethe, Freud was driven by an intense desire to study natural science and to solve some challenging problems confronting contemporary scientists.

In his third year at the university Freud began research work on the central nervous system in the physiological laboratory under the direction of the German physician Ernst Wilhelm von Brücke. Neurological research was so engrossing that Freud neglected the prescribed courses and as a result remained in medical school three years longer than was normally required to qualify as a physician. In 1881, after completing a year of compulsory military service, he received his medical degree. Unwilling to give up his experimental work, however, he remained at the university as a demonstrator in the physiological laboratory. In 1883, at Brücke’s urging, he reluctantly abandoned theoretical research to gain practical experience.

Freud spent three years at the General Hospital of Vienna, devoting himself successively to psychiatry, dermatology, and nervous diseases. In 1885, following his appointment as a lecturer in neuropathology at Vienna University, he left his post at the hospital. Later the same year he was awarded a government grant enabling him to spend 19 weeks in Paris as a student of the French neurologist Jean Charcot. Charcot, who was the director of the clinic at the mental hospital the Salpêtrière, was then treating nervous disorders by using hypnotic suggestion. Freud’s studies under Charcot, which centred largely on hysteria, influenced him greatly in channelling his interests toward psychopathology.

In 1886 Freud established a private practice in Vienna specializing in nervous disease. He met with violent opposition from the Viennese medical profession because of his strong support of Charcot’s unorthodox views on hysteria and hypnotherapy. The resentment he incurred was to delay any acceptance of his subsequent findings on the origin of neurosis.

Freud’s first published work, On Aphasia, appeared in 1891; it was a study of the neurological disorder in which the ability to pronounce words or to name common objects is lost because of organic brain disease. His final work in neurology, an article, “Infantile Cerebral Paralysis,” was written in 1897 for an encyclopedia only at the insistence of the editor, since by this time Freud was occupied largely with psychological rather than physiological explanations for mental illnesses. His subsequent writings were devoted entirely to that field, which he had named psychoanalysis in 1896.

During the early years of the development of psychoanalysis and even afterwards, Freud regarded himself as the bearer of painful truths that people, at least upon first hearing or reading, did not want to face. Psychoanalytically oriented therapy involves facing the great pain of giving up certain deeply held, personally important beliefs. Understood in this way, Nietzsche’s words would have touched a sympathetic chord in Freud when he wrote that the things which are truly productive are offensive. Nietzsche insisted, as did Freud, on resisting the temptation toward easy answers and superficiality in the face of painful truths. Nietzsche wrote of his own day that it was more in need than ever of what continues to count as untimely: telling the truth (assuming, at least about some things, that truth can be reached and communicated).

In 1894 The Antichrist and Nietzsche Contra Wagner (both completed in 1888) were first published. Nietzsche refers to himself as a psychologist in both works, referring in the former to his analysis of ‘the psychology of conviction, of faith’. He states that ‘one cannot be a psychologist or physician without at the same time being an anti-Christian’, that ‘philology and medicine [are] the two great adversaries of superstition’, and that ‘“Faith” as an imperative is the veto against science.’ Nietzsche offers a psychological analysis of the powerful and primitive forces at work in the experience and condition of faith and a scathing attack on the Apostle Paul. Freud likewise had no affectionate feeling for Paul; he was an atheist and understood religious experience and belief from a psychological perspective related to Nietzsche’s understanding (as well as to that of Feuerbach, to whom both Nietzsche and Freud were indebted). Of particular importance for psychoanalysis (and for understanding Freud) is the idea of inventing a history (including a history of one’s self) to meet particular needs.

From the early years in the development of psychoanalysis up until the present day, there has been substantial discussion and debate regarding the extent to which Nietzsche discovered and elaborated upon ideas generally ascribed to Freud, as well as the extent to which Freud may have been influenced by Nietzsche in his development of a number of fundamental psychoanalytic concepts. In 1929 Thomas Mann, a great admirer of Freud, wrote: “He [Freud] was not acquainted with Nietzsche, in whose work everywhere appear gleams of insight anticipatory of Freud’s later views.” Mann considered Nietzsche to be “the greatest critic and psychologist of morals.” In an early study of the development of Freud’s thought, it was suggested that Freud was not aware of certain philosophical influences on his thought, that Nietzsche “must perhaps be looked upon as the founder of disillusioning psychology,” that “Nietzsche’s division into Dionysian and Apollonian . . . is almost completely identical with that of the primary and secondary function [process],” and that Nietzsche and certain other writers “were aware that the dream had a hidden meaning and significance for our mental life.” Karl Jaspers, who contributed to the fields of psychiatry, depth psychology and philosophy, frequently commented on Nietzsche’s psychological insights and discussed Nietzsche in relation to Freud and psychoanalysis. In his text General Psychopathology, only Freud appears more frequently than Nietzsche. Jaspers went so far as to state that Freud and psychoanalysis had used ideas pertaining to the “meaningfulness of psychic deviation . . . in a misleading way and this blocked the direct influence on [the study of] psychopathology of great people such as Kierkegaard and Nietzsche”; he also wrote of Freud popularizing “in crude form” certain ideas related to Nietzsche’s concept of sublimation.

Jones notes “a truly remarkable correspondence between Freud’s conception of the super-ego and Nietzsche’s exposition of the origin of the bad conscience.” Another analyst, Anzieu, offers a summary of Nietzsche’s anticipation of psychoanalytic concepts: it was Nietzsche who invented the term das Es (the id). He had some understanding of the economic point of view, which comprises discharge and transfer of energy from one drive to another. However, he believed that aggression and self-destruction were stronger than sexuality. On several occasions he used the word sublimation (applying it to both the aggressive and the sexual instincts). He described repression, but called it inhibition; he talked of the super-ego and of guilt feelings, but called them resentment, bad conscience and false morality. Nietzsche also described, without giving them a name, the turning of drives against oneself, the paternal image, the maternal image, and the renunciation imposed by civilization on the gratification of our instincts. The “superman” was the individual who succeeded in transcending the conflict between established values and his instinctual urges, thus achieving inner freedom and establishing his own personal morality and scale of values. In other words, Nietzsche foreshadowed what was to be one of the major aims of psychoanalytic treatment.

While there is a growing body of literature examining the relationship between the writings of Freud and Nietzsche, there has appeared no detailed, comprehensive study of the extent to which Freud may have been influenced by Nietzsche through the course of his life, or of the complex nature of Freud’s personal and intellectual relationship to Nietzsche. In part this may be attributed to Freud’s assurances that he had never studied Nietzsche and had never been able to get beyond the first half page or so of any of his works, due both to the overwhelming wealth of ideas and to the resemblance of Nietzsche’s ideas to the findings of psychoanalysis. In other words, Freud avoided Nietzsche in part to preserve the autonomy of the development of his own ideas.

Nietzsche and Freud were influenced by many of the same currents of nineteenth-century thought. Both were interested in ancient civilization, particularly Greek culture. Both were interested in Greek tragedy (and debates about catharsis), and both were particularly drawn to the figure of Oedipus. Both were interested in and attracted to heroic figures and regarded themselves as such. Both held Goethe in the highest regard. They were influenced by Darwin, evolutionary theory, contemporary theories of energy, anthropology and studies of the origins of civilization. They were influenced by earlier psychological writings, including, possibly, those of Hippolyte Taine (1828-1893). They were also influenced by a basic historical sense, “the sense of development and change that was now permeating thinking in nearly every sphere.” They wanted to understand, so to speak, the animal in the human and, as unmaskers, were concerned with matters pertaining to the relation between instinct and reason, conscious and unconscious, rational and irrational, appearance and reality, surface and depth. Both attempted to understand the origins and power of religion and morality. They were influenced by the Enlightenment and its hopes for reason and science, while at the same time being influenced by Romanticism’s preoccupations with the unconscious and irrational. While beginning their careers in other fields, both came to regard themselves, among other things, as depth psychologists.

All the same, one has to keep in mind the extent to which Nietzsche and Freud were both influenced by forces at work in the German-speaking world of the latter part of the nineteenth century, and the extent to which similarities in their thought might be attributed to such factors rather than to Nietzsche having a direct influence upon Freud.

For example, both Nietzsche and Freud were interested in anthropology, both read Sir John Lubbock (1834-1913) and Edward Tylor (1832-1917), and both were influenced by these authors. However, an examination of the similarities between Nietzsche and Freud would seem to indicate that there is also a direct influence of Nietzsche upon Freud, so that Wallace, while noting such common sources, still writes of Nietzsche’s anticipation of and influence upon Freud. Also, Thatcher, while writing of Nietzsche’s debt to Lubbock, writes specifically of Nietzsche’s, not Lubbock’s, “remarkable” anticipation of an idea central to Freud’s The Future of an Illusion.

One can also note Nietzsche’s inclination to use medical terminology in relation to psychological observation and “dissection”: at its present state as a specific individual science, the awakening of moral observation has become necessary, and mankind can no longer be spared the cruel sight of the moral dissection table and its knives and forceps; for here there rules that science which asks after the origin and history of the so-called moral sensations.

Freud wrote of analysts modelling themselves on the surgeon “who puts aside all his feelings, even his human sympathy, and concentrates his mental forces on the single aim of performing the operation as skilfully as possible,” and remarked: “The most successful cases are those in which one proceeds, as it were, without any purpose in view, allows oneself to be taken by surprise by any new turn in them, and always meets them with an open mind, free from any presuppositions.”

In regard to broad cultural and paradigm changes, Nietzsche was one of the thinkers who heralded such changes. In his book on Freud’s social thought, Berliner writes of the changes in intellectual orientation that occurred around 1885, stating that such changes were “reflected in the work of Friedrich Nietzsche.” Berliner goes on to mention some of Nietzsche’s contributions to understanding the human mind, conscience and the origins of civilization, and his being representative of ‘uncovering’ or ‘unmasking’ psychology. Berliner concludes, as have others, that the generation of Freud’s young maturity “was permeated with the thought of Nietzsche.”

Nevertheless, although Freud expressed admiration for Nietzsche on a number of occasions, acknowledged his “intuitive” grasp of concepts anticipating psychoanalysis, placed him among the few persons he considered great, and stated in 1908 that “the degree of introspection achieved by Nietzsche had never been achieved by anyone, nor is it likely ever to be reached again,” he never acknowledged studying specific works of Nietzsche at any length, nor did he state in any detail what his own thoughts were in regard to specific works or ideas of Nietzsche.

Whenever an idea of Nietzsche’s that may have influenced Freud is discussed without tracing its development within Nietzsche’s own work, it may appear as if Nietzsche formulated his ideas without the great help of his forerunners. It may therefore be useful to take note of the following words of Stephen Jay Gould regarding our discomfort with evolutionary explanations: “one reason must reside in our social and psychic attraction to creation myths in preference to evolutionary stories . . . [creation myths] identify heroes and sacred places, while evolutionary stories provide no palpable, particular object as a symbol for reverence, worship or patriotism.” Or as Nietzsche put it: “Whenever one can see the act of becoming [in contrast to ‘present completeness and perfection’] one grows comparatively cool.”

It may be that the place of myth in our lives, in this instance the myth of the hero (with implications for our relationship to Nietzsche and Freud, for their own self-understanding as heroes, and for Freud’s relationship to Nietzsche), is not so readily relinquished even in the realm of scholarly pursuits, a notion Nietzsche elaborated upon on a number of occasions.

Nietzsche discusses the origins of Greek tragedy in the creative integration of what he refers to as Dionysian and Apollonian forces, named for the gods Apollo and Dionysus. Apollo is associated with law, with beauty and order, with reason, with self-control and self-knowledge, with the sun and light. Dionysus is associated with orgiastic rites, music, dance and, later, drama. He is the god who is ripped into pieces, dismembered (representing individuation), and whose rebirth is awaited (the end of individuation). Religious rituals associated with him enact and celebrate death and rebirth; he is also associated with crops, including the grape (and wine and intoxication), and with sexuality. Frenzied, ecstatic female worshippers (maenads) are central to the rituals and celebrations. Both gods have a home in Delphi, Dionysus reigning in the winter when his dances are performed there.

In a note from The Will to Power Nietzsche defines the Apollonian and the Dionysian: the word “Dionysian” means an urge to unity, a reaching out beyond personality, the everyday, society, reality, across the abyss of transitoriness, a passionate-painful overflowing into darker, fuller, more floating states, . . . the feeling of the necessary unity of creation and destruction. One contemporary classical scholar writes of “the unity of salvation and destruction . . . [as] a characteristic feature of all that is tragic.”

The word “Apollinian” means the urge to perfect self-sufficiency, to the typical “individual,” to all that simplifies, distinguishes, makes strong, clear, unambiguous, typical: freedom under the law.

Nietzsche notes that Strauss announces with admirable frankness that he is no longer a Christian, but that he does not wish to disturb anyone’s peace of mind. Nietzsche writes of Strauss’ view of a new scientific man and his “faith”: “the heir of but a few hours, he is ringed around with frightful abysses, and every step he takes ought to make him ask: Whither? Whence? To what end?” Rather than facing such frightful questions, however, Strauss’ scientific man seems permitted to spend his life on questions whose answer could at bottom be of consequence only to someone assured of eternity. Such confidence may also encourage the belief that, as it has been put, all men dance to the tune of an invisible piper; yet it hardly follows that everything can be known, or that history is determined: the supposed causes may produce only the consequences we expect.

Perhaps of even greater importance is Human, All Too Human. We have already commented on sublimation; by way of definition, to sublimate is to modify the natural expression of a primitive, instinctual impulse in a socially acceptable manner, diverting the energy associated with an unacceptable impulse or drive into a personally and socially acceptable activity. It is here, as Young points out, that Nietzsche heralds a new methodology. He contrasts metaphysical philosophy with his historical [later genealogical] philosophy. His is a methodology for philosophical inquiry into the origins of human psychology, an inquiry that “can no longer be separated from natural science,” and, as he will do on other occasions, he offers a call to those who might have the ears to hear: “Will there be many who desire to pursue such researches? Mankind likes to put questions of origins and beginnings out of its mind; must one not be almost inhuman to detect in oneself a contrary inclination?”

Nietzsche writes of the anti-nature of the ascetic ideal, how it relates to a disgust with oneself, its continuing destructive effect upon the health of Europeans, and how it relates to the realm of “subterranean revenge” and ressentiment. Nietzsche writes of the repression of instincts (though not specifically of impulses toward sexual perversions) and of their being turned inward against the self: the “instinct for freedom forcibly made latent . . . this instinct for freedom pushed back and repressed.” Also, “this hatred of the human, and even more of the animal, and more still of the material.” Zarathustra also speaks of the tyranny of the holy or sacred: he once loved “thou shalt” as most sacred; now he must find illusion and caprice even in the most sacred, that freedom from his love may become his prey; the lion is needed for such prey. It would appear that while Freud’s formulation as it pertains to sexual perversions and incest is most certainly not derived from Nietzsche (although along different lines incest was an important factor in Nietzsche’s understanding of Oedipus), the relating of the idea of the holy to the sacrifice or repression of instinctual freedom was very possibly influenced by Nietzsche, particularly in light of Freud’s reference to the ‘holy’ as well as to the ‘overman’. These issues were also explored in The Antichrist, which had been published just two years earlier. In addition, Freud wrote, perhaps for the first time, of sublimation: “I have gained a sure inkling of the structure of hysteria. Everything goes back to the reproduction of scenes. Some can be obtained directly, others always by way of fantasies set up in front of them. The fantasies stem from things that have been heard but understood subsequently, and all their material is of course genuine. They are protective structures, sublimations of the facts, embellishments of them, and at the same time serve for self-relief.”

Nietzsche had written of sublimation, and he specifically wrote of the sublimation of sexual drives in the Genealogy. Freud’s use of the term differs slightly from his later and more Nietzschean usage, such as in Three Essays on the Theory of Sexuality, but as Kaufmann notes, while “the word is older than either Freud or Nietzsche . . . it was Nietzsche who first gave it the specific connotation it has today.” Kaufmann regards the concept of sublimation as one of the most important concepts in Nietzsche’s entire philosophy. Furthermore, Freud wrote that a ‘presentiment’ told him that he would ‘very soon uncover the source of morality’; this is the very subject of Nietzsche’s Genealogy.

At a later time in his life Freud claimed he could not read more than a few passages of Nietzsche because he was overwhelmed by the wealth of ideas. This claim might be supported by the fact that Freud demonstrates only a limited understanding of certain of Nietzsche’s concepts. For example, his reference to the “overman” demonstrates a lack of understanding of the overman as a being of the future whose freedom involves creative self-overcoming and sublimation, not simply freely gratified primitive instincts. Later in life, in Group Psychology and the Analysis of the Ego, Freud demonstrates a similar misunderstanding in his equating of the overman with the tyrannical father of the primal horde. Perhaps Freud confused the overman with the “master” whose morality is contrasted with “slave” morality in the Genealogy and Beyond Good and Evil. The conquering master more freely gratifies instinct and affirms himself, his world and his values as good. The conquered slave, unable to express himself freely, creates a negating, resentful, vengeful morality glorifying his own crippled, alienated condition, and he creates a division not between good (noble) and bad (contemptible), but between good (undangerous) and evil (wicked, powerful, dangerous); the two moralities at times occur within a single soul.

Although Nietzsche never gave dreams anything like the attention and analysis given by Freud, he was definitely not one of “the dark forest of authors who do not see the trees, hopelessly lost on wrong tracks.” Yet, whether reviewing the literature on dreams or writing at any later point in his life, Freud will not, in specific and detailed terms, discuss Nietzsche’s ideas as they pertain to psychoanalysis, just as he will never state exactly when he read or did not read Nietzsche or what he did or did not read. We may never know which of Nietzsche’s passages on dreams Freud may have read, or heard of, or read about, as he was working on The Interpretation of Dreams. Freud’s May 31, 1897, letter to Fliess includes a reference to the overman, contrasting this figure with the saintly or holy, which is (as is civilization) connected to instinctual renunciation, particularly of incest and sexual perversion. Freud also writes that he has a presentiment that he shall “soon uncover the source of morality,” the subject of Nietzsche’s Genealogy. Earlier, he made what may have been his first reference to sublimation, a concept explored and developed by Nietzsche. We have also pointed to the possible, perhaps even likely, allusions to Nietzsche in letters of September and November 1897, which refer respectively to Nietzsche’s notion of a revaluation or transvaluation of all values and to Nietzsche’s idea of the relationship of our turning our noses away from what disgusts us, our own filth, to our civilized condition, our becoming “angels.” Freud adds that so too consciousness turns away from memory: “This is repression.” Then there is Nietzsche’s passage on dreams in which he refers to Oedipus and to the exact passage that Freud refers to in The Interpretation of Dreams. One author has referred to Nietzsche’s idea as coming “preternaturally close to Freud.” At a later point we see that in Freud’s remarks in The Interpretation of Dreams on the distinctiveness of psychoanalysis and his achievements regarding the understanding of the unconscious (his unconscious versus the unconscious of the philosophers), Nietzsche is perhaps made present through his very absence.

These ideas of Nietzsche’s on dreams are not merely of interest in regard to the ways in which they anticipate Freud. They are very much related to more recent therapeutic approaches to the understanding of dreams. Nietzsche values dreaming states over waking states with regard to the dream’s closeness to the “ground of our being”; the dream “informs” us of feelings and thoughts that “we do not know or feel precisely while awake”; in dreams “there is nothing unimportant or superfluous”; the language of dreams entails ‘chains of symbolical scenes’ and images in place of [and akin to] the language of poetic narration; content, form, duration, performer, spectator - in these comedies you are all of this yourself (and these comedies include the “abominable”). Recent life experiences and tensions, “the absence of nourishment during the day,” give rise to these dream inventions, which “give scope and discharge to our drives.”

The self, as manifested in the construction of dreams, may be an aspect of our psychic lives that knows things our waking “I” or ego may not know and may not wish to know, and a relationship may be developed between these aspects of our psychic lives in which the latter opens itself creatively to the communications of the former. Zarathustra states: “Behind your thoughts and feelings, my brother, there stands a mighty ruler, an unknown sage - whose name is self. In your body he dwells, he is your body.” However, Nietzsche’s self cannot be understood as a replacement for an all-knowing God to whom the “I” or ego appeals for its wisdom, commandments, guidance and the like. To open oneself to another aspect of oneself that is wiser (“an unknown sage”), in the sense that new information can be derived from it, does not necessarily entail that this “wiser” component of one’s psychic life has God-like knowledge and commandments which, if one (one’s “I”) interprets and follows correctly, will set one on the straight path. It is true that when Nietzsche writes of the self as “a mighty ruler and unknown sage” he does open himself to such an interpretation, and even to the possibility that this “ruler” is unreachable, unapproachable for the “I.” However, the context of the passage (Nietzsche/Zarathustra redeeming the body) and the sections that follow, including “On the Despisers of the Body,” make it clear that there are aspects of our psychic selves that interpret the body and mediate its direction, ideally in ways that do not deny the body but that aid the body in doing what it would do above all else, “to create beyond itself.”

Nietzsche explored the ideas of psychic energy and of drives pressing for discharge. His notion of sublimation typically implies an understanding of drives in just such a sense, as does his idea that dreams provide for the discharge of drives. However, he did not relegate all that is derived from instinct and the body to this realm. While for Nietzsche there is no stable, enduring true self awaiting discovery and liberation, the body and the self (in the broadest sense of the term, including what is unconscious and may be at work in dreams, as Rycroft describes it) may offer up potential communication and direction to the “I” or ego. At times, though, Nietzsche describes the “I” or ego as having very little, if any, idea as to how it is being lived by the “it.”

Nietzsche, like Freud, describes two types of mental processes: one which “binds” [man’s] life to reason and its concepts in order not to be swept away by the current and to lose himself; the other, pertaining to the world of myth, art and the dream, “constantly showing the desire to shape the existing world of the wide-awake person to be variegatedly irregular and disinterestedly incoherent, exciting and eternally new, as is the world of dreams.” Art may function as a “middle sphere” and middle faculty (a transitional sphere and faculty) between a more primitive “metaphor-world” of impressions and the forms of uniform abstract concepts.

All the same, it is difficult to understand what Freud could have meant by not reading Nietzsche in his later years, as well as to determine whether he acknowledged having read Nietzsche in earlier years. Freud never tells us exactly what he read of Nietzsche and never tells us exactly which years were those during which he avoided Nietzsche. We do know, of course, that a few years earlier, in 1908, Freud had read and discussed Nietzsche, including a work of direct relevance to his own anthropological explorations as well as to ideas pertaining to the relationship between the repression of instinct and the development of the inner world and conscience. We have also seen that lectures, articles and discussions on Nietzsche continued around Freud. It does seem, though, that Freud demonstrates a readiness to “forgo all claims to priority” regarding the psychological observations of Nietzsche and others that the science of psychoanalysis has confirmed.

Nevertheless, Nietzsche recognized the aggressive instinct and will to power in various forms and manifestations, including sublimated mastery, all of which are prominent in Freud’s writings.

We can also take note of the power and importance Freud ascribed to rational thinking and scientific laws. Freud writes that the world-view erected upon science entails “submission to the truth and rejection of illusions.” He writes, quoting Goethe, of “Reason and Science, the highest strength possessed by man,” and of “the bright world governed by relentless laws which has been constructed for us by science.” However, he also writes that such a world-view demands strict discipline, and that a resistance stirs within us against the relentlessness and monotony of the laws of thought and against the demands of reality-testing; reason becomes the enemy which withholds from us so many possibilities of pleasure.

However bright the world of science is, and however much reason and science represent “the highest strength possessed by man,” this world, these laws, these faculties require from us “submission” to a withholding enemy that imposes “strict discipline” with “relentlessness and monotony.” However much this language pertains to a description of universal problems in human development, one may wonder whether it does not also reflect Freud’s own experience of the call of reason as a relentless, laborious submission.

There is no reason that empirical research cannot be of help in determining what kinds of “self-description” or narratives (as well as, of course, many other aspects of the therapeutic process) may be effective for different kinds of persons with different kinds of difficulties in different kinds of situations. From a Nietzschean perspective, while it is obvious and desirable that the therapist will influence the patient’s or client’s self-descriptions and narratives, and the converse as well, a high value will be placed, however much it is a joint creation of a shared reality, on encouraging the individual to fashion a self-understanding, self-description or narrative that is to a significant extent of his or her own creation. That one has been creative in this way (and hopefully can go on creating) will be a very different experience from having the therapist’s narrative simply replace the original narrative brought to therapy; it can be thought of as the individual’s increased capacity for a playful, creative application of a perspectivist approach to his or her life experience and history, though this approach, like any other, would be understood as related to the sublimation of drives as an aspect of the pursuit of truth. This does not entail that one searches with the understanding that what one finds is uncovered like an archaeological find.

Both Freud and Nietzsche are engaged in a redefinition of the root of subjectivity, a redefinition that replaces the moral problematic of selfishness with the economic problematic of what Freud would call narcissism . . . [Freud and Nietzsche elaborate upon] the whole field of libidinal economy: the transit of libido through other selves, aggression, the infliction and reception of pain, and something very much like death (the total evacuation of the entire quantum of excitation with which the organism is charged).

Nietzsche suggests that in our concern for the other, in our sacrifice for the other, we are concerned with ourselves, one part of ourselves being represented by the other. That for which we sacrifice ourselves is unconsciously related to as another part of us. In relating to the other we are in fact relating to a part of ourselves, and we are concerned with our own pleasure and pain and our own expression of the will to power. In one analysis of pity Nietzsche states that, although we are, to be sure, not consciously thinking of ourselves, it is primarily our own pleasure and pain that we are concerned about, and that the feelings and reactions that ensue are multi-determined.

Nietzsche holds that we have a divided nature and that we respond to others in part on the basis of projecting and identifying with aspects of ourselves in them. In Human, All Too Human, Nietzsche writes of a deception in love: we forget a great deal of our own past and deliberately banish it from our minds . . . we want the image of ourselves that shines upon us out of the past to deceive us and flatter our self-conceit - we are engaged continually on this self-deception. Do you think, you who speak so much of ‘self-forgetfulness in love’, of ‘the merging of the ego in the other person’, and laud it so highly, do you think this is anything essentially different? We shatter the mirror, impose ourself upon someone we admire, and then enjoy our ego’s new image, even though we may call it by that other person’s name.

It is a commonplace that beauty lies in the eye of the beholder, but all the same we routinely talk of the beauty of things and people as if it were an identifiable real property which they possess. Projectivism denotes any view which sees us as similarly projecting upon the world what are in fact modifications of our own minds. According to this view, sensations are displaced from their rightful place in the mind when we think of the world as coloured or noisy. Other examples of the idea involve things other than sensations and do not consist of any literal displacement. One is that all contingency is a projection of our ignorance; another is that the causal order of events is a projection of our mental confidence in the way they follow from one another. However, the most common application of the idea is in ethics and aesthetics, where many writers have held that talk of the value or beauty of things is a projection of the attitudes we take toward them and the pleasure we take in them.

It is natural to associate projectivism with the idea that we make some kind of mistake in talking and thinking as if the world contained the various features we describe it as having, when in reality it does not. However, the view that we make no mistake, but simply adopt efficient linguistic expression for necessary ways of thinking, is also held.

Nonetheless, in the Dawn Nietzsche describes man, in the person of the ascetic, as ‘split asunder into a sufferer and a spectator’, enduring and enjoying within himself (as a consequence of his drive for ‘distinction’, his will to power) that which the barbarian imposes on others. As Staten points out, Nietzsche asks whether the basic disposition of the ascetic and of the pitying god who creates suffering humans can be held simultaneously, such that one would do ‘hurt to others in order thereby to hurt oneself, in order then to triumph over oneself and one’s pity and revel in an extremity of power’. Nietzsche appears to be suggesting that in hurting the other I may, through identification, be attempting to hurt one part of myself, so that whatever my triumph over the other, I may be as concerned with one part of myself triumphing over the part of myself I identify within the other, as well as thereby overcoming pity and in consequence ‘revelling in an extremity of power’. (Or, in a variation of such dynamics, as Michel Hulin has put it, the individual may be ‘tempted to play both roles at once, contriving to torture himself in order to enjoy all the more his own capacity for overcoming suffering’.)

In addition to Nietzsche’s writing specifically of the sublimation of the libidinous drive, the will to power and its vicissitudes are described at times in ways related to sexual as well as aggressive drives, particularly in the forms of appropriation and incorporation. As Staten points out, this notion of the primitive will to power is similar to Freud’s idea in Group Psychology and the Analysis of the Ego according to which ‘identification [is] the earliest expression of an emotional tie with another person . . . It behaves like a derivative of the first, oral phase of the organization of the libido, in which the object that we long for and prize is assimilated by eating.’ It would appear that Nietzsche goes a step further than Freud in one of his notes when he writes: ‘Nourishment is only derivative; the original phenomenon is the desire to incorporate everything’. Staten also concludes that ‘if Freudian libido contains a strong element of aggression and destructiveness, Nietzschean will to power never takes place without a pleasurable excitation that there is no reason not to call erotic’. There is, however, that element of ‘enigma and cruelty’ which is imposed only on the beloved object and increases in proportion to the love; cruel people being always masochists also, the whole thing is inseparable from bisexuality. One can only imagine how far Nietzsche would have gone, and to what extent he would have expanded upon insights other than Freud’s.

Freud’s new orientation was preceded by his collaborative work on hysteria with the Viennese physician Josef Breuer. The work was presented in 1893 in a preliminary paper and two years later in an expanded form under the title Studies on Hysteria. In this work the symptoms of hysteria were ascribed to manifestations of undischarged emotional energy associated with forgotten psychic traumas. The therapeutic procedure involved the use of a hypnotic state in which the patient was led to recall and reenact the traumatic experience, thus discharging by catharsis the emotions causing the symptoms. The publication of this work marked the beginning of psychoanalytic theory formulated on the basis of clinical observations.

From 1895 to 1900 Freud developed many concepts that were later incorporated into psychoanalytic practice and doctrine. Soon after publishing the studies on hysteria he abandoned the use of hypnosis as a cathartic procedure and substituted the investigation of the patient’s spontaneous flow of thoughts, called free association, to reveal the unconscious mental processes at the root of the neurotic disturbance.

Nietzsche discusses the origins of Greek tragedy in the creative integration of what he calls Dionysian and Apollonian forces. Apollo is associated with law, with beauty and order, with reason, with self-control and self-knowledge, with the sun and light. Dionysus is associated with orgiastic rites, music, dance and, later, drama. Religious rituals associated with him enact and celebrate death, rebirth and fertility. He is also associated with crops, including the grape (and the wine of intoxication), and with sexuality. Frenzied, ecstatic female worshippers (maenads) are central to the rituals and celebrations.

In a note from The Will to Power Nietzsche characterizes the Apollonian and the Dionysian: the word ‘Dionysian’ means an urge to unity, a reaching out beyond personality, the everyday, society, reality, across the abyss of transitoriness: a passionate-painful overflowing into darker, fuller, more floating states, . . . the feeling of the necessary unity of creation and destruction. [One contemporary classical scholar writes of ‘the unity of salvation and destruction . . . (as) a characteristic feature of all that is tragic’.]

The word ‘Apollinian’ means, among other things, the urge to perfect self-sufficiency, to the typical ‘individual’, to all that simplifies, distinguishes, makes strong, clear, unambiguous, typical: freedom under the law. Apollo is also described as a dream interpreter.

Yet, all the same, we might discern Nietzsche’s influence in an important paper of this period, the 1914 paper ‘On Narcissism: An Introduction’. In this paper Freud explores, among other things, the implications of his finding of an original libidinal cathexis of the ego, from which some libido is later given off to objects, but which fundamentally persists and is related to the object-cathexes much as the body of an amoeba is related to the pseudopodia which it puts out.

The development of the ego consists in a departure from primary narcissism and results in a vigorous attempt to recover that state. This departure is brought about by means of the displacement of libido onto an ego-ideal imposed from without, and satisfaction is derived from fulfilling this ideal. At the same time, the ego has sent out the libidinal object-cathexes. It becomes impoverished in favour of these cathexes, just as it does in favour of the ego-ideal, and it enriches itself again from its satisfactions in respect of the object, just as it does by fulfilling its ideal.

Freud considers the implications of these findings for his dual instinct theory, which divides instincts into the duality of ego instincts and libidinal instincts. Freud questions this division, but does not definitively abandon it, as he will later do in Beyond the Pleasure Principle.

As indicated, one of Freud’s important points is that the ego tries to recover its state of primary narcissism. This is related to important themes running through Nietzsche’s writings. Nietzsche is aware of how we relate to others based on projections of idealized images of ourselves, and he is consistently looking for the ways in which we are loving ourselves and aggrandizing ourselves in activities that appear to reflect contrary motivations.

Nietzsche attempts to show that Greek culture and drama had accomplished the great achievement of recognising and creatively integrating the substratum of the Dionysian with the Apollonian. As Siegfried Mandel suggests, Nietzsche destroyed widely held aesthetic views, inspired in 1755 by the archaeologist-historian Johann Winckelmann, about the ‘noble simplicity, calm grandeur’, ‘sweetness and light’, harmony and cheerfulness of the ancient Greeks, and posed instead the dark Dionysian forces that had to be harnessed to make possible the birth of tragedy.

It is also important to consider that it is through the dream’s Apollonian images that the Dionysian reality can be manifested and known, just as it is through the individuated actors on stage that the underlying Dionysian reality is manifested in Greek tragedy. At its most creative, the Apollonian can allow an infusion of the Dionysian that it harnesses, but we should also note that Nietzsche is quite explicit that when the splendour of the Apollonian impulse stood before an art in which frenzy, rapture and excess ‘spoke the truth’, ‘excess revealed itself as truth’, and against this new power the Apollonian rose to the austere majesty of Doric art and the Doric view of the world. For Nietzsche, ‘the Dionysian and the Apollonian, in new births ever following and mutually augmenting one another, controlled the Hellenic genius.’

Nietzsche is unchallenged as the most insightful and powerful critic of the moral climate of the 19th century (and of what remains of it in ours). He explores unacknowledged unconscious motivation and the conflict of opposing forces within the mind, together with the possibilities of their creative integration. Nietzsche distinguishes between two types of mental processes and is aware of the conflict between unconscious instinctual impulses and wishes on the one hand and inhibiting or repressing forces on the other. Both Freud and Nietzsche are engaged in a redefinition of the roots of subjectivity, a redefinition that replaces the moral problem with issues concerning the economic problem of what Freud would call narcissism . . . Freud and Nietzsche elaborate upon the whole field of libidinal economy: the transit of the libido through other selves, aggression, the infliction and reception of pain, and something very much like death, the total evacuation of the entire quantum of excitation with which the organism is charged.

The real world is flux and change for Nietzsche, but in his later works there is no “unknowable true world.” The split between a surface, apparent world and an unknowable but true world of things-in-themselves was, as is well known, a view Nietzsche rejected. For one thing, as Mary Warnock points out, Nietzsche was attempting to get across the point that there is only one world, not two. She also suggests that for Nietzsche, if we contribute anything to the world, it is the idea of a “thing,” and in Nietzsche’s words, “the psychological origin of the belief in things forbids us to speak of things-in-themselves.”

Nietzsche holds that there is an extra-mental world to which we are related and with which we are in some kind of contact. For him, even as knowledge develops in the service of self-preservation and power, to be effective a conception of reality will have a tendency to grasp (if only) a certain amount of, or aspect of, reality. However much Nietzsche may at times see (the truth of) artistic creation and dissimulation (out of chaos) as paradigmatic for science (which will not recognize it as such), in arriving at this position Nietzsche assumes the truth of scientifically based beliefs as a foundation for many of his arguments, including those regarding the origin, development and nature of perception, consciousness and self-consciousness, and what this entails for our knowledge and falsification of the external and inner world. In fact, to some extent the form-providing, affirmative, this-worldly healing of art is a response to the terrifying, nausea-inducing truths revealed by science, which by itself has no treatment for the underlying cause of the nausea. Nietzsche also writes of horrifying existential truths against which science can attempt a [falsifying] defence. Nevertheless, while there is a real world to which we are related, there is no sensible way to speak of a nature or constitution or eternal essence of the world in itself, apart from description and perspective. Moreover, the states of affairs which our interpretations are to fit are established within human perspectives and reflect (though not only) our interests, concerns and needs for calculability. Within such relations (and perhaps as meta-commentary on the grounds of our knowing), Nietzsche is quite willing to write of the truth, the constitution of reality, and the facts of the case. There appears to be no unrestricted will to power, nor any privilege of absolute truth. To expect a pure desire for a pure truth is to expect an impossible desire for an illusory ideal.

The unarticulated comes to rule supreme in oblivion, either in the individual’s forgetfulness or in those long stretches of the collective past that have never been and will never be called forth into the necessarily incomplete articulations of history, the record of human existence that is profusely interspersed with dark passages. This accounts for the continuous questing of archeology, palaeontology, anthropology, and geology, and accounts, too, for Nietzsche’s warning against the “insomnia” of historicism. As for the individual, the same drive is behind the modern fascination with the unconscious and, thus, with dreams, and it was Nietzsche who, before Freud, spoke of forgetting as an activity of the mind. At the beginning of his Genealogy of Morals, he claims, in defiance of all psychological “shallowness,” that the lacunae of memory are not merely “passive” but the outcome of an active and positive “screening,” preventing us from remembering what would upset our equilibrium. Nietzsche is thus the first discoverer of successful “repression,” the burying of potential experience in the unarticulated.

Still, he is notorious for stressing the ‘will to power’ that is the basis of human nature, the ‘resentment’ that comes when it is denied its basis in action, and the corruptions of human nature encouraged by religions, such as Christianity, that feed on such resentment. Yet the powerful human being who escapes all this, the ‘Übermensch’, is not the ‘blond beast’ of later fascism: it is a human being who has mastered passion, risen above the senseless flux, and given creative style to his or her character. Nietzsche’s free spirits recognize themselves by their joyful attitude to eternal return. He frequently presents the creative artist rather than the warlord as his best exemplar of the type, but the disquieting fact remains that he seems to leave himself no words to condemn any uncaged beast of prey who finds his or her style by exerting repulsive power over others. Nietzsche’s frequently expressed misogyny does not help this problem, although in such matters the interpretation of his many-layered and ironic writing is not always straightforward. Similarly, such anti-Semitism as is found in his work is balanced by an equally intense denunciation of anti-Semitism, and by an equal or greater contempt for the German character of his time.

Nietzsche’s current influence derives not only from his celebration of the will, but more deeply from his scepticism about the notions of truth and fact. In particular, he anticipated many central tenets of postmodernism: an aesthetic attitude toward the world that sees it as a ‘text’, the denial of facts, the denial of essences, the celebration of the plurality of interpretations and of fragmented, politicized discourse, all of which awaited their rediscovery in the late 20th century. Nietzsche also has the incomparable advantage over his followers of being a wonderful stylist, and his perspectives are echoed in the shifting array of literary devices - humour, irony, exaggeration, aphorism, verse, dialogue, parody - with which he explores human life and history.

All the same, Nietzsche is openly pessimistic about the possibility of knowledge: ‘We simply lack any organ for knowledge, for ‘truth’: We ‘know’ (or believe or imagine) just as much as may be useful in the interests of the human herd, the species, and perhaps precisely that most calamitous stupidity of which we shall perish some day’ (The Gay Science).

Nonetheless, that refutation assumes that if a view, such as perspectivism itself, is an interpretation, it is by that very fact wrong. This is not so, however. To call a view an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that perspectivism is really false, it is necessary to produce another view superior to it on specific epistemological grounds.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes to specific approaches truth in relation to facts specified by those approaches themselves. Still, it refuses to envisage a single independent set of facts, to be accounted for by all theories. Thus, Nietzsche grants the truth of specific scientific theories; he does, however, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’: neither the facts science addresses nor the methods it employs are privileged. They serve the purposes for which they have been devised, but these have no priority over the many other purposes of human life.

Every schoolchild learns eventually that Nietzsche was the author of the shocking slogan, "God is dead." However, what makes that statement possible is another claim, even more shocking in its implications: "Only that which has no history can be defined" (Genealogy of Morals). Since Nietzsche was the heir to seventy-five years of German historical scholarship, he knew that there was no such thing as something that has no history. Darwin, as Dewey points out, had effectively shown that searching for a true definition of a species is not only futile but unnecessary (since the definition of a species is something temporary, something that changes over time, without any permanent and stable reality). Nietzsche dedicates his philosophical work to doing the same for all cultural values.

It is important to reflect for a moment on the full implications of this claim. The Western study of moral philosophy begins with the dialectical exchange that explores the question "What is virtue?", holding firm that until we can settle that issue with a definition that survives all cultural qualification, and so say what virtue is, we cannot effectively deal with morality, except through divine dispensation, unexamined reliance on traditions, skepticism, or relativism (the position of Thrasymachus). The full exploration of what dealing with that question of definition might require takes place in the Republic.

Many texts we read subsequently took up Plato's challenge, seeking to discover, through reason, a permanent basis for understanding knowledge claims and moral values. No matter what the method, as Nietzsche points out in his first section, the belief was always that grounding knowledge and morality in truth was possible and valuable, that the activity of seeking to ground morality was conducive to a fuller good life, individually and communally.

To use a favourite metaphor of Nietzsche's, we can say that previous systems of thought had sought to provide a true transcript of the book of nature. They made claims about the authority of one true text. Nietzsche insists repeatedly that there is no single canonical text; there are only interpretations. So there is no appeal to some definitive version of Truth (whether we search in philosophy, religion, or science). Thus the Socratic quest for some way to tie morality down to the ground, so that it does not fly away, is (and has always been) futile, although the long history of attempts to do so has disciplined the European mind so that we, or a few of us, are ready to move into dangerous new territory where we can put the most basic assumptions about the need for conventional morality to the test and move on "Beyond Good and Evil," that is, to a place where we do not take the universalizing concerns and claims of traditional morality seriously.

Nietzsche begins his critique here by challenging that fundamental assumption: Who says that seeking the truth is better for human beings? How do we know an untruth is not better? What is truth anyway? In doing so, he challenges the sense of purpose basic to the traditional philosophical endeavour. Philosophers, he points out early, may be proud of the way they begin by challenging and doubting received ideas. However, they never challenge or doubt the key notion they all start with, namely, that there is such a thing as the Truth and that it is something valuable for human beings (surely much more valuable than its opposite).

In other words, just as the development of the new science had gradually, and for many painfully and rudely, emptied nature of any certainty about a final purpose, about the possibility of ever agreeing on the ultimate value of scientific knowledge, so Nietzsche is, with the aid of the new historical science (and the proto-science of psychology), emptying all sources of cultural certainty of their traditional purposiveness and claims to permanent truth, and therefore of their value as we have traditionally understood the term. There is thus no antagonism between good and evil, since all versions of each are equally fictive (although some may be more useful for the purposes of living than others).

I do not want here to analyse the various ways Nietzsche deals with this question. Nevertheless, I do want to insist upon the devastating nature of his historical critique of all previous systems that have claimed to ground knowledge and morality on a clearly defined truth of things. For Nietzsche's genius rests not only on his adopting the historical critique and applying it to new areas but much more on his astonishing perspicuity in seeing just how extensive and flexible the historical method might be.

For example, Nietzsche, like some of those before him, insists that value systems are culturally determined: they arise, he insists, as often as not from, or in reaction to, conventional folk wisdom. Yet to this he adds something that to us, after Freud, may be well accepted, but that in Nietzsche's hands becomes something shocking: understanding a system of value, he claims, requires us more than anything else to see it as the product of a particular individual's psychological history, a uniquely personal confession. The "meaning" of a moral system has nothing to do with its relationship to something called the "Truth"; instead we seek its coherence in the psychology of the philosopher who produced it.

Gradually it has become clear to me what every great philosophy so far has been: namely, the personal confession of its author and a kind of involuntary and unconscious memoir; also that the moral (or immoral) intentions in every philosophy constituted the real germ of life from which the whole plant had grown.

Here the historical critique unmasks claims to “truth” by referring them to the history of the life of the person proposing the particular "truth", this time a psychological history. Systems offering us a route to the Truth are simply psychologically produced fictions that serve the deep (often unconscious) purposes of the individual proposing them. They are therefore what Nietzsche calls "foreground" truths. They do not penetrate into the deep reality of nature, and to fail to see this is to lack "perspective."

Even more devastating is Nietzsche's extension of the historical critique to language itself. Since philosophical systems deliver themselves to us in language, they are shaped by that language and by the history of that language. Our Western preoccupation with an inner self which perceives, deliberates, wills, and so forth, Nietzsche can thus present as, in large part, the product of grammar, the result of a language that builds its statements around a subject and a predicate. Without that historical accident, Nietzsche affirms, we would not have committed the error of mistaking for the truth something that is a by-product of our particular culturally determined language system.

He makes the point, for example, that our faith in consciousness is just an accident. If instead of saying "I think," we were to say "Thinking is going on in my body," then we would not be tempted to give the "I" some independent existence (e.g., in the mind) and make large claims about the ego or the inner self. The reasons we do search for such an entity stem from the accidental construction of our language, which encourages us to use a subject (the personal pronoun) and a verb. The same false confidence in language also makes it easy for us to think that we know clearly what key things like "thinking" and "willing" are; whereas, if we were to engage in even a little reflection, we would quickly realize that the inner processes neatly summed up by these apparently clear terms are anything but clear. His emphasis on the importance of psychology as queen of the sciences underscores his sense of how we need to understand more fully just how complex these activities are, particularly the emotional appetites, before we talk about them so simplistically, as philosophers have customarily done.

This remarkable insight enables Nietzsche, for example, at one blow and with cutting contempt, devastatingly to dismiss as "trivial" the system Descartes had set up so carefully in the Meditations. Descartes's triviality consists in failing to recognize how the language he uses as an educated European imprisons and shapes his philosophical system, and in his facile treatment of what thinking is in the first place. The famous Cartesian dualism is not a central philosophical problem but an accidental by-product of grammar designed to serve Descartes's own particular psychological needs. Similarly, Kant's discovery of "new faculties" Nietzsche derides as just a trick of language - a way of providing what looks like an explanation but is, in fact, as ridiculous as the old notions about medicines putting people to sleep because they have the sleeping virtue.

It should be clear from examples like this (and others throughout) that there is very little capable of surviving Nietzsche's onslaught, for what is there to which we can point that does not have a history or that is not delivered to us in a historically developing system of language? After all, our scientific enquiries in all areas of human experience teach us that nothing simply is, for everything is always becoming.

Nietzsche had written that with the repression of instincts and their turn inward, ‘the entire inner world, originally as thin as if it were stretched between two membranes, expanded and extended itself, acquired depth, breadth, and height’, and in the same work he writes of the ‘bad conscience’ . . . [as] ‘the womb of all ideal and imaginative phenomena’, which brought to light ‘an abundance of strange new beauty and affirmation, and perhaps beauty itself’.

These developments begin with the finding of an original libidinal cathexis of the ego, from which some is later given off to objects, but which fundamentally persists and is related to the object-cathexes much as the body of an amoeba is related to the pseudopodia which it puts out.

The development of the ego consists in a departure from primary narcissism and results in a vigorous attempt to recover that state. This departure is brought about by means of the displacement of the libido onto an ego-ideal imposed from without, and satisfaction is derived from fulfilling this ideal.

While the ego has sent out the libidinal object-cathexes, it becomes impoverished in favour of these cathexes, and it enriches itself again from its satisfactions in respect of the object, just as it does by fulfilling its ideal.

Freud considers the implications of such findings for his dual instinct theory, which divides instincts into the duality of ego instincts and libidinal instincts. Freud questions this division, but does not definitively abandon it, as he will later do in Beyond the Pleasure Principle.

As indicated, one of Freud’s important points is that the ego attempts to recover its state of primary narcissism. This is related to important themes in Nietzsche: we relate to others based on projections of idealized images of ourselves, and we are loving and aggrandizing ourselves in activities that appear to reflect contrary motivations.

As a mother gives to her child that of which she deprives herself . . . is it not clear that in [such] instances man loves something of himself . . . more than something else of himself . . . the inclination for something (wish, impulse, desire) is present in all [such] instances; to give in to it, with all the consequences, is in any event not ‘unegoistic’.

As Freud was entering his study of the destructive instincts - the death instinct and its manifestations outward as aggression as well as its secondary turn back inward upon itself - one might wonder whether Nietzsche, who had explored the vicissitudes of aggression and was famous for his concept of the will to power, was among the ‘all kinds of things’ Freud was reading. At the least, Freud clearly had the ‘recurrence of the same’ on his mind during this period, along with pessimism and reflections on pleasure. While Freud’s conception of pleasure as the release or discharge of tension and the decrease of tension has strong affinities with Schopenhauer, there is also the comparatively different pleasure of Eros.

One point to be made is that Nietzsche’s concept of the will to power was an attempt to go beyond the pleasure principle and beyond good and evil - a principle which, for Nietzsche, is the primary drive in both its primitive and its more sublimated manifestations. All the same, pain is an essential ingredient, since it is not a state attained at the end of suffering but the process of overcoming (of obstacles and suffering) that is the central factor in the experience of an increase of power and joy.

Freud writes of a kind or level of mastery, the binding of instinctual impulses, that is a preparatory act. Although this binding and the replacement of primary process by secondary process operate before and without necessary regard for ‘the development of unpleasure’, the transformation occurs on behalf of the pleasure principle: the binding is the preparatory act that ‘introduces and assures the dominance of the pleasure principle’ . . . The binding . . . [is] designed to prepare the excitation for its final elimination in the pleasure of discharge.

For the individual who suffers this repeated and frustrated pursuit of pleasure, it is not only the object of the past that cannot be recovered, nor the relation that cannot be restored or reconstructed; it is time itself that resists the human will and proves unyielding. Between pleasure and satisfaction, a prohibition or negation of pleasure is enacted which necessitates the endless repetition and proliferation of thwarted pleasures. The repetition is a vain effort to stay, or to reverse, time; such repetition reveals a rancour against the present that feeds upon itself.

However, at this point we might be tempted, as many have been, to point to the new natural science as a counter-instance: does natural science not typify a progressive realization of the truth of the world, or at least a closer and closer approximation to that truth? In fact, it is interesting to think about just how closely Kuhn and Nietzsche might be linked in their views about the relationship between science and the truth of things, or about the extent to which modern science might provide the most promising refutation of Nietzsche's assertion that there is no privileged access to a final truth of things (a hotly disputed topic in the last decade or more). Let us simply say here that for Nietzsche science is just another "foreground" way of interpreting nature. It has no privileged access to the Truth, although he does concede that, compared with other beliefs, it has the advantage of being based on sense experience and is therefore more useful for modern times.

There is one important point to stress in this review of the critical power of Nietzsche's project. It is essential to note that Nietzsche is not taking us to task for having beliefs. We have to have beliefs. Human life must be the affirmation of values; otherwise, it is not life. Nonetheless, Nietzsche is centrally concerned to mock us for believing that our belief systems are True, are fixed, are somehow eternally right by a grounded standard of knowledge. Human life, in its highest forms, must be lived in the full acceptance that the values we create for ourselves are fictions. We, or the best of us, have to have the courage to face the fact that there is no "Truth" upon which to ground anything in which we believe; we must live in full view of that harsh insight, and yet affirm ourselves with joy. The Truth is not accessible to our attempts at discovery; what thinking human beings characteristically do, in their pursuit of the Truth, is create their own truths.

Now, this last point, like the others, has profound implications for how we think of ourselves, for our conception of the human. For human individuals, like human cultures, also have a history. Each of us has a personal history, and thus we ourselves cannot be defined; we, too, are in a constant process of becoming, of transcending the person we have been into something new. We may like to think of ourselves as defined by some essential rational quality, but in fact we are not. In stressing this, of course, Nietzsche links himself with certain strains of Romanticism, especially (from the point of view of our curriculum) with William Blake.

This tradition of Romanticism holds up a view of life that is radically individualistic, self-created, self-generated. "I must create my own system or be enslaved by another man's," Blake wrote. It is also thoroughly aristocratic, with little room for traditional altruism, charity, or egalitarianism. Our lives, to realize their highest potential, should be lived in solitude from others, except perhaps those few we recognize as kindred souls, and our life's efforts must be a spiritually demanding but joyful affirmation of the process by which we maintain the vital development of our imaginative conceptions of ourselves.

It might be appropriate to contrast this view of the self as a constantly developing entity without essential permanence with Marx's view. Marx, too, insists on the process of transformation of ideas, but for him that transformation is controlled by the material forces of production, and these in turn are driven by the logic of history. It is not something that the individual takes charge of by an act of individual will, because individual consciousness, like everything else, emerges from and is dependent upon the particular historical and material circumstances, the stage in the development of production, of the social environment in which the individual finds himself or herself.

Nietzsche, like Marx, and unlike later Existentialists, de Beauvoir, for example, recognizes that the individual inherits particular things from the historical moment of the culture (e.g., the prevailing ideas and, particularly, the language and ruling metaphors). Thus, for Nietzsche the individual is not totally free of all context. However, the appropriate response to this is not, as in Marx, the development of class consciousness, a solidarity with other citizens and an imperative to help history along by committing oneself to the class war alongside other proletarians, but, in the best and brightest spirits, a call for a heightened sense of individuality, of one's radical separation from the herd, of one's final responsibility to one's own most fecund creativity.

It is vital to see that Nietzsche and the earlier Romantics are not simply saying we should do what we like. They all have a sense that self-creation of the sort they recommend requires immense spiritual and emotional discipline - the discipline of the artist shaping his most important original creation following the stringent demands of his creative imagination. These demands may not be rational, but they are not permissively relativistic in that 1960's sense ("If it feels good, do it"). Permissiveness may have often been attributed to this Romantic tradition, a sort of 1960's "shop till you drop" ethic, but that is not what any of them had in mind. For Nietzsche that would simply be a herd response to a popularized and bastardized version of a much higher call to a solitary life lived with the most intense but personal joy, suffering, insight, courage, and imaginative discipline.

This aspect of Nietzsche's thought represents the fullest nineteenth-century European affirmation of a Romantic vision of the as radically individualistic (at the opposite end of the spectrum from Marx's views of the social and economically determined). A profound and lasting effect in the twentieth century as we become ever more uncertain about coherent social identities and thus increasingly inclined to look for some personal way to take full charge of our own identities without answering to anyone but ourselves.

Much of the energy and much of the humour in Nietzsche's prose comes from the urgency with which he sees such creative self-affirmation as essential if the human species is not going to continue to degenerate. For Nietzsche, human beings are, primarily, biological creatures with certain instinctual drives. The best forms of humanity are those who most excellently express the most important of these biological drives, the "will to power," by which he means the individual will to assert oneself and create what he or she needs in order to live most fully. Such a "will to power" is beyond morality, because it does not answer to anyone else's system of what counts as good and bad conduct. The best and strongest human beings are those who create values for themselves, live by them, and refuse to acknowledge their common links with anyone else, other than other strong people who do the same and are thus their peers.

His surveys of world history have convinced Nietzsche that the development of systems of morality favouring the weak, the suffering, the sick, the criminal, and the incompetent (all of whom he lumps together in that famous phrase "the herd") has turned this basic human drive against human beings themselves. He salutes the genius of those who could accomplish this feat (especially the Jews and Christians), which he sees as the revenge of the slaves against their natural masters. From these centuries-long acts of revenge, human beings are now filled with feelings of guilt, inadequacy, jealousy, and mediocrity, a condition alleviated, if at all, by dreams of being helpful to others and of an ever-expanding democracy, an agenda powerfully served by modern science (which serves to bring everything and everyone down to the same level). Fortunately, however, this ordeal has trained our minds splendidly, so that the best and brightest (the new philosophers, the free spirits) can move beyond the traditional boundaries of morality, that is, "beyond good and evil" (his favourite metaphor for this condition is the tensely arched bow ready to shoot off an arrow).

It is important to stress that Nietzsche does not believe that becoming such a "philosopher of the future" is easy or for everyone. It is, by contrast, an extraordinarily demanding call, and those few capable of responding to it might have to live solitary lives without recognition of any sort. He is demanding an intense spiritual and intellectual discipline that will enable the new spirit to move into territory no philosopher has ever roamed before, a displacing medium where there are no comfortable moral resting places and where the individual will probably (almost unquestionably) have to pursue a profoundly lonely and perhaps dangerous existence (hence the importance of another favourite metaphor of his, the mask). Nevertheless, this is the only way we can counter the increasing degeneration of European man into a practical, democratic, technocratic, altruistic herd animal.

By way of a further introduction to Nietzsche's Beyond Good and Evil, I would like to offer an extended analogy, and then to extend some remarks in directions that have not yet been explored.

Before placing the analogy on the table, however, I wish to issue a caveat. Analogies can really help to clarify, but they can also mislead by being persuasive out of all proportion. I hope that the analogy I offer will provide such clarity, but not at the price of oversimplifying. So, as you listen to this analogy, you need to ask two questions: To what extent does this analogy not hold? To what extent does it reduce the complexity of what Nietzsche is saying to a simpler form?

The analogy I want to put on the table is the comparison of human culture to a huge recreational complex in which several different games are going on. Outside, people are playing soccer on one field, rugby on another, American football on another, Australian football on another, and so on. In the club house different groups of people are playing chess, dominoes, poker, and so on. There are coaches, spectators, trainers, and managers involved in each game. Surrounding the recreational complex is wilderness.

We might use these games to characterize different cultural groups: French Catholics, German Protestants, scientists, Enlightenment rationalists, European socialists, liberal humanitarians, American democrats, free thinkers, or what have you. The variety represents the rich diversity of intellectual, ethnic, political, and other activities.

The situation is not static, of course. Some games have far fewer players and fans, and their popularity is shrinking; some are gaining popularity rapidly and increasingly taking over parts of the territory available. Thus, the traditional sport of Aboriginal lacrosse is but a small remnant of what it was before contact, whereas the democratic capitalist game of baseball is growing exponentially, as is the materialistic science game of archery. They might combine their efforts to create a new game or merge their leagues.

When Nietzsche looks at Europe historically, what he sees is that different games have been going on like this for centuries. He further sees that many participants in any one game have been aggressively convinced that their game is the "true" game, that it corresponds with the essence of games or is a close match to the wider game they imagine going on in the natural world, in the wilderness beyond the playing fields. So they have spent much time producing their rule books and coaches' manuals and making claims about how the principles of their game copy or reveal or approximate the laws of nature. This has promoted and still promotes a good deal of bad feeling and fierce argument. Therefore, in addition, within the group pursuing any one game there have always been all sorts of sub-games debating the nature of the activity, refining the rules, arguing over the correct version of the rule book or about how to educate the referees and coaches, and so on.

Nietzsche's first goal is to attack this dogmatic claim about the truth of the rules of any particular game. He does this, in part, by appealing to the tradition of historical scholarship that shows that these games are not eternally true, but have a history. Rugby began when a soccer player broke the rules and picked up the ball and ran with it. American football developed out of rugby and has changed and is still changing. Basketball had a precise origin that can be historically found.

Rule books are written in languages that have a history, by people with a deep psychological point to prove: the games are an unconscious expression of the particular desires of inventive games people at a very particular historical moment; these rule writers are called Plato, Augustine, Socrates, Kant, Schopenhauer, Descartes, Galileo, and so on. For various reasons they believe, or claim to believe, that the rules they come up with reveal something about the world beyond the playing field and are therefore "true" in a way that other rule books are not; they have, as it were, privileged access to reality and thus record, to use a favourite metaphor of Nietzsche's, the text of the wilderness.

In attacking such claims, Nietzsche points out that the wilderness bears no relationship at all to any human invention like a rule book; he points out that nature is "wasteful beyond measure, without purposes and consideration, without mercy and justice, fertile and desolate and uncertain at the same time: think of indifference itself as a power - how could you live according to this indifference? Living - is that not precisely wanting to be other than this nature?" Because there is no connection with what nature truly is, such rule books are mere "foreground" pictures, fictions dreamed up, reinforced, altered, and discarded for contingent historical reasons. Moreover, the rule manuals often bear a suspicious resemblance to the rules of grammar of a culture; thus, for example, the notion of an ego as a thinking subject, Nietzsche points out, is closely tied to the rules of European languages that insist on a subject and verb construction as an essential part of any statement.

So how do we know that what we have is the truth? Why do we want the truth, anyway? People seem to need to believe that their games are true, but why? Might they not be better off if they accepted that their games were false, were fictions, having nothing to do with the reality of nature beyond the recreational complex? If they understood the fact that everything they believe in has a history and that, as he says in the Genealogy of Morals, "only that which has no history can be defined," they would understand that all this proud history of searching for the truth is something quite different from what the philosophers who have written rule books proclaim.

Furthermore, these historical changes and developments occur accidentally, for contingent reasons, and have nothing to do with the games, or any one game, shaping itself according to any ultimate game or any rule book given by the wilderness, which is indifferent to what is going on. There is no basis for the belief that, if we look at the history of the development of these games, we discover some progressive evolution of games toward some higher type. We may be able, like Darwin, to trace historical genealogies, to construct a narrative, but that narrative does not reveal any clear direction or any final goal or any progressive development. The genealogy of games suggests that history is a record of contingent change. The assertion that there is such a thing as progress is simply another game, another rule added by inventive minds (who need to believe in progress); it bears no relationship to nature beyond the sports complex.

While one is playing on a team, one follows the rules and thus has a sense of what constitutes right and wrong, or good and evil, conduct in the game. All those carrying out the same endeavour share this awareness. To pick up the ball in soccer is evil (unless you are the goalie), and to punt the ball while running in American football is permissible but stupid; in Australian football both actions are essential and right. In other words, different cultural communities have different standards of right and wrong conduct. These standards are determined by artificial inventions called rule books, one for each game. These rule books have developed historically; thus, they have no permanent status and no claim to privileged access.

Now, at this point you might be thinking of Aristotle's Ethics, which acknowledges that different political systems have different rules of conduct. Still, Aristotle believes that an examination of different political communities will enable one to derive certain principles common to them all, bottom-up generalizations that will then provide the basis for reliable rational judgment about which game is being played better, about what counts as good play in any particular game, and about whether or not a particular game is being conducted well.

In other words, Aristotle maintains that there is a way of discovering and appealing to some authority outside any particular game to adjudicate moral and knowledge claims that arise in particular games or in conflicts between different games. Plato, of course, also believed in the existence of such a standard, but proposed a different route to discovering it.

Now Nietzsche emphatically denies this possibility. Anyone who tries to do what Aristotle recommends is simply inventing another game (we can call it Super-sport) and is not discovering anything true about the real nature of games, because reality (the wilderness surrounding us) is not organized as a game. In fact, he argues that we have created this recreational complex and all the activities that go on in it to protect ourselves from nature (which is indifferent to what we do with our lives), not to copy some recreational rule book that the wilderness reveals. Human culture exists as an affirmation of our opposition to, or contrast with, nature, not as an extension of rules that include both human culture and nature. That is why falsehoods about nature might be a lot more useful than truths, if they enable us to live more fully human lives.

If we think of the wilderness as a text about reality, as the truth about nature, then, Nietzsche claims, we have no access at all to that text. What we do have access to are conflicting interpretations, none of them based on privileged access to a "true" text. Thus, the soccer players may think that they and their game are superior to rugby and the rugby players, because soccer more closely represents the surrounding wilderness, but such statements about better and worse are irrelevant. There is nothing rule-bound outside the games themselves. Therefore, all dogmatic claims about the truth of all games or of any particular game are false.

Now, how did this situation come about? Well, there was a time when all Europeans played almost the same game and had done so for many years. Having little or no historical knowledge and sharing the same head coach in the Vatican and the same rule book, they believed that the game was the only one possible and had been around for ever. So they naturally believed that their game was true. They shored up that belief with appeals to scripture or to eternal forms, or to universal principles or to rationality or science or whatever. There were many quarrels about the nature of ultimate truth, that is, about just how one should tinker with the rule book, about what provided access to God's rules, but there was agreement that such access must exist.

Take, for example, the offside rule in soccer. Without that rule the game could not continue in its traditional way. Therefore, soccer players see the offside rule as an essential part of their reality, and since soccer is the only game in town and we have no idea of its history (which might, for example, tell us about the invention of the offside rule), the offside rule is easy to interpret as a universal, a requirement for social activity, and we will find and endorse scriptural texts that reinforce that belief. Our scientists will devote their time to linking the offside rule with the mysterious rumblings that come from the forest. From this, one might be led to conclude that the offside rule is a Law of Nature, something that extends far beyond the realm of our particular game into all possible games and, beyond those, into the realm of the wilderness itself.

Of course, there were powerful social and political forces (the coach and trainers and owners of the team) who made sure that people had lots of reasons for believing in the unchanging verity of present arrangements. So it is not surprising that we find plenty of learned books, training manuals, and locker room exhortations urging everyone to remember the offside rule and to castigate as "bad" those who routinely forget that part of the game. We will also worship those who died in defence of the offside rule. Naturally any new game that did not recognize the offside rule would be a bad game, an immoral way to conduct oneself. So if some group tried to start a game with a different offside rule, that group would be attacked because they had violated a rule of nature and were thus immoral.

However, for contingent historical reasons, Nietzsche argues, that situation of only one game in town did not last. The recreational unity of the area broke apart, and developments in historical scholarship demonstrated all too clearly, with an overwhelming amount of evidence, that all the various attempts to show that one specific game was the one true game, privileged over all the others, were false, dogmatic, trivial, self-deceiving, and so on.

For science has revealed that the notion of a necessary connection between the rules of any game and the wider purposes of the wilderness is simply an ungrounded assertion. There is no way in which we can make the connection between the historically derived fictions in the rule book and the mysterious and ultimately unknowable directions of irrational nature. To carry on science, we have to believe in causes and effects, but there is no way we can prove that this is a true belief, and there is a danger for us if we simply ignore that fact. Therefore, we cannot prove a link between the game and anything outside it. History has shown us, just as Darwin's natural history has shown, that all apparently eternal things have a story, a line of development, a genealogy. Thus, notions like species have no permanent reality - they are temporary fictions imposed for the sake of defending a particular arrangement.

So, God is dead. There is no eternal truth anymore, no rule book in the sky, no ultimate referee or international Olympic committee chair. Nietzsche did not kill God; history and the new science did. Nietzsche is only the most passionate and irritating messenger, announcing over the intercom system to anyone who will listen that anyone like Kant or Descartes or Newton who thinks that what he or she is doing is grounded in the truth of nature has simply been mistaken.

This insight is obvious to Nietzsche, and he is troubled that no one seems worried about it or even to have noticed it. So he is moved to call the matter to our attention as stridently as possible, because he thinks that this realization requires a fundamental shift in how we live our lives.

For Nietzsche Europe is in crisis. It has a growing power to make life comfortable and an enormous energy. However, people seem to want to channel that energy into arguing about what amount to competing fictions and into forcing everyone to follow particular fictions.

Why is this insight so worrying? Well, one point is that dogmatists get aggressive. Soccer players and rugby players who forget what Nietzsche is pointing out can start killing each other over questions that admit of no answer, namely, questions about which group has the true game, which ordering has privileged access to the truth. Nietzsche senses that dogmatism is going to lead to warfare, and he predicts that the twentieth century will see an unparalleled extension of warfare in the name of competing dogmatic truths. Part of his project is to wake up the people who are intelligent enough to respond to what he is talking about so that they can recognize the stupidity of killing each other for an illusion that they mistake for some "truth."

Besides that, Nietzsche, like Mill (although in a very different way), is seriously concerned about the possibilities for human excellence in a culture where the herd mentality is taking over, where Europe is developing into competing herds - a situation that is either sweeping up the best and the brightest or stifling them entirely. Nietzsche, like Mill and the ancient pre-Socratic Greeks to whom he constantly refers, is an elitist. He wants the potential for individual human excellence to be liberated from the harnesses of conformity and group competition and conventional morality. Otherwise, human beings are going to become destructive, lazy, conforming herd animals, using technology to divert themselves from the greatest joys in life, which come only from individual striving and creativity, activities that require one to release one's instincts without keeping them eternally subjugated to a controlling historical consciousness or a conventional morality of good and evil.

What makes this a particular problem for Nietzsche is that he sees that a certain form of game is gaining popularity: democratic volleyball. In this game, the rule book insists that all players be treated equally, that no natural authority be given to the best players or to those who understand the nature of quality play. Therefore the mass of inferior players is taking over, the quality of the play is deteriorating, and there are fewer and fewer good volleyball players. This process is being encouraged both by the traditional ethic of "help your neighbour," now often in a socialist uniform, and by modern science. As the mass of ever more inferior players takes over the sport, the mindless violence of their desire to attack other players and take over their games increases, as does their hostility to those who are uniquely excellent (who may need a mask to prevent themselves being recognized).

The hopes for any change in this development are not good. In fact, things might be getting worse. For when Nietzsche looks at all these games going on he notices certain groups of people, and the prospect is not totally reassuring.

First there remain the overwhelming majority of people: the players and the spectators, those caught up in their particular sport. These people are, for the most part, continuing as before without reflecting or caring about what they do. They may be vaguely troubled by rumours they hear that their game is not the best, they may be bored with the endless repetition in the schedule, and they may have essentially reconciled themselves to the fact that theirs is not the only game going on, but they would rather not think about it. Or else, stupidly confident that what they are doing is what really matters about human life, that it is true, they preoccupy themselves with tinkering with the rules, using the new technology to get better balls, more comfortable seats, louder whistles, more brightly painted side lines, more trendy uniforms, tastier Gatorade - all in the name of progress.

Increasing numbers of people are moving into the stands or participating through the newspaper or the television set. Most people are thus, in increasing numbers, losing touch with themselves and their potential as instinctual human beings. They are the herd, the last men, preoccupied with the trivial, unreflectingly conformist because they think, to the extent they think at all, that what they do will bring them something called "happiness." Yet they are not happy: they are in a permanent state of narcotized anxiety, seeking new ways to entertain themselves with the steady stream of distractions that the forces of the market produce: technological toys, popular entertainment, college education, Wagner's operas, academic jargon.

This group, of course, includes all the experts in the game, the cheerleaders whose job it is to keep us focussed on the seriousness of the activity, the sports commentators and pundits, whose lives are bound up with interpreting, reporting, and classifying players and contests. These sportscasters are, in effect, the academics and government experts, the John Maddens and Larry Kings and Mike Wallaces of society, those demigods of the herd, whose authority derives from the false notion that what they are dealing with is something other than a social fiction.

There is a second group of people, who have accepted the ultimate meaninglessness of the game in which they were engaged. They have moved to the sidelines, not as spectators or fans, but as critics, as cynics or nihilists, dismissing out of hand all the pretensions of the players and fans, but not affirming anything themselves. These are the souls who, having nothing to will (because they have seen through the fiction of the game and therefore have no motive to play any more), prefer to will nothing in a state of paralysed skepticism. Nietzsche has a certain admiration for these people, but maintains that a life like this, the nihilist on the sidelines, is not a human life.

For, Nietzsche insists, to live as a human being is to play a game. Only in playing a game can one affirm one's identity, can one create values, can one truly exist. Games are the expression of our instinctual human energies, our living drives, what Nietzsche calls our "will to power." So the nihilistic stance, though understandable and, in a sense, courageous, is sterile. For we are born to play, and if we do not, then we are not fulfilling a worthy human function. Yet we also have to recognize that all games are equally fictions, invented human constructions without any connection to the reality of things.

So we arrive at the position that we need to affirm a belief (invent a rule book) which we know to have been invented, to be divorced from the truth of things. To play the best game is to live by rules that we invent for ourselves as an assertion of our instinctual drives and to accept that the rules are fictions: they matter, we accept them as binding, we judge ourselves and others by them, and yet we know they are artificial. Just as in real life a normal soccer player derives a sense of meaning during the game, affirms his or her value in the game, without ever once believing that the rules of soccer organize the universe or that those rules have any universal validity, so we must commit ourselves to epistemological and moral rules that enable us to live our lives as players, while simultaneously recognizing that these rules have no universal validity.

The nihilists have discovered half this insight, but, because they cannot live the full awareness, they are very limited human beings.

The third group of people, that small minority which includes Nietzsche himself, are those who accept the games metaphor, see the fictive nature of all systems of knowledge and morality, and accept the challenge that to be most fully human is to create a new game, to live a life governed by rules imposed by the dictates of one's own creative nature. To base one's life on the creative tensions of the artist engaged in creating a game that meets most eloquently and uncompromisingly the demands of one's own irrational nature - one's wish - is to be most fully free, most fully human.

This call to live the self-created life, affirming oneself in a game of one's own devising, necessarily condemns the highest spirits to loneliness, doubt, insecurity, and emotional suffering, because most people will mock the new game or be actively hostile to it or refuse to notice it, and so on; alternatively, they will accept the challenge but misinterpret what it means and settle for some marketed easy game, like floating down the Mississippi smoking a pipe. Nevertheless, a self-generated game also brings with it the most intense joy, the most playful and creative affirmation of what is most important in our human nature.

It is important to note here that one's freedom to create one's own game is limited. In that sense, Nietzsche is no existentialist maintaining that we have a duty and an unlimited freedom to be whatever we want to be. For the resources at our disposal - the parts of the field still available and the recreational material lying around in the club house - are determined by the present state of our culture. Furthermore, the rules I devise and the language I frame them in will ordinarily owe a good deal to the present state of the rules of other games and the state of the language in which those are expressed. Although I may change the rules of my game, my starting point - the rules available to be changed - is given to me by my moment in history. So in moving forward, in creating something that will transcend the past, I am using the materials of the past. Existing games are the materials out of which I fashion my new game.

Thus, the new philosopher will transcend the limitations of the existing games and will extend the catalogue of games with the invention of new ones, but that new creative spirit faces certain historical limitations. If this is relativistic, it is not totally so.

The value of this endeavour is not to be measured by what other people think of the newly created game; nor does its value lie in fame, material rewards, or service to the group. Its value comes from the way it enables the individual to manifest certain human qualities, especially the will to power. Whether or not the game attracts other people and becomes a permanent fixture on the sporting calendar, something later citizens can derive enjoyment from or even remember, is irrelevant. For only the accidents of history determine whether the game I invent attracts other people, that is, becomes a source of value for them.

Nietzsche claims that the time is right for such a radically individualistic endeavour to create new games, new metaphors for my life. For, wrongheaded as many traditional games may have been, like Plato's metaphysical soccer or Kant's version of eight ball, or Marx's materialist chess tournament, or Christianity's stoical snakes and ladders, they have splendidly trained us for the much more difficult work of creating values in a spirit of radical uncertainty. Those exertions have trained our imaginations and intelligence in useful ways. So, although those dogmatists were unsound, an immersion in their systems has done much to refine those capacities we most need to rise above the nihilists and the herd.

Now, having put this analogy on the table for our consideration, let me clarify some central points about Nietzsche. The metaphor is not so arbitrary as it may appear, because this very notion of systems of meaning as invented games is a central metaphor of twentieth-century thought, and those who insist upon it as often as not point to Nietzsche as their authority.

So, for example, when certain postmodernists insist that the major reason for engaging in artistic creativity or literary criticism or any form of cultural life is to awaken a spirit of creative play that is far more central than any traditional sense of meaning or rationality or even coherence, we can see the spirit of Nietzsche at work.

Earlier in this century, as we will see in the discussions of early modern art, a central concern was the possibility of recovering some sense of meaning or of recreating or discovering a sense of "truth" of the sort we had in earlier centuries. Marxists were determined to assist history in producing the true meaning toward which we were inexorably heading. To the extent that we can characterize post-modernism simply at all, we might say that it marks a turning away from such responses to the modern condition and an embrace, for better or worse, of Nietzsche's joyful affirmation of the irrationality of the world and the fictive quality of all that we create to deal with life.

One group we can quickly identify is those who have embraced Nietzsche's critique, who appeal to his writing to endorse their view that the search to ground our knowledge and moral claims in Truth is futile, and that we must therefore recognize the imperative Nietzsche laid before us to create our own lives, to come up with new self-descriptions affirming the irrational basis of our individual humanity. This position has been loosely termed Antifoundationalism. Two of its most prominent and popular spokespersons in recent years have been Richard Rorty and Camille Paglia. Within humanities departments the Deconstructionists (with Derrida as their guru) lead the Nietzschean charge.

Antifoundationalists approvingly link Nietzsche closely with Kuhn and with Dewey (whose essay on Darwin we read) and sometimes with Wittgenstein, and take central aim at anyone who would claim that some form of enquiry, like science, rational ethics, Marxism, or traditional religion, has any form of privileged access to reality or the truth. The political stance of the Antifoundationalists tends to be radically romantic or pragmatic. Since we cannot ground our faith in any public morality or political creed, politics becomes something far less important than personal development, or else we have to conduct our political life simply on a pragmatic basis, following the rules we can agree on, without according those rules any universal status or grounding in eternal principles. If mechanistic science is something we find, for accidental reasons of history, useful, then we will believe it for now. Thus, Galileo's system became adopted, not because it was true or closer to the truth than what it replaced, but simply because the vocabulary he introduced into our descriptions was something we found agreeable and practically helpful. When it ceases to fulfill our pragmatic requirements, we will gradually change to another vocabulary, another metaphor, another version of a game. History shows that such a change will occur, but how and when it will take place, or what the new vocabulary might be, are questions that will be determined by the accidents of history.

Similarly, human rights are important, not because there is any rational non-circular proof that we ought to act according to these principles, but simply because we have agreed, for accidental historical reasons, that these principles are useful. Such pragmatic agreements are all we have for public life, because, as Nietzsche insists, we cannot justify any moral claims by appeals to the truth. So we can agree about a schedule for the various games and about distributing the budget between them, and we can, as a matter of convenience, set certain rules for our discussions, but only as a practical requirement of our historical situation, not because any divine authority or rational system sanctions them.

A second response is to reject the Antifoundationalist and Nietzschean claim that no language has privileged access to the reality of things, to assert, that is, that Nietzsche is wrong in his critique of the Enlightenment. Plato's project is not dead, as Nietzsche claimed, but alive and well, especially in the scientific enterprise. We are discovering ever more about the nature of reality. There may still be a long way to go, and nature might be turning out to be much more complex than the early theories suggested, but we are making progress. By improving the rule book we will modify our games so that they more closely approximate the truth of the wilderness.

To many scientists, for example, the Antifoundationalist position is either irrelevant or just plain wrong, an indication that social scientists and humanities types do not understand the nature of science or are suffering a bad attack of sour grapes because of the prestige the scientific disciplines enjoy in the academy. The failure of the social scientists (after generations of trying) to come up with anything approaching a reliable law (like, say, Newton's laws of motion) has shown, on this view, the pseudoscientific basis of those disciplines, and unmasks their turn to Nietzschean Antifoundationalism as a feeble attempt to justify their presence in the modern research university.

Similarly, Marxists would reject Antifoundationalism as a remnant of aristocratic bourgeois capitalism, an ideology designed to take intellectuals' minds off the realities of history, the truth of things. There is a truth grounded in a materialist view of history, and Antifoundationalism serves only to divert philosophers away from social injustice. No wonder the most ardent Nietzscheans in the university have no trouble getting support from the big corporate interests and their bureaucratic subordinates: the Ford Foundation and the National Endowment for the Humanities. Within the universities and many humanities and legal journals, some of the liveliest debates go on between the Antifoundationalists and Deconstructionists, allied under the banner of Nietzsche, and the historical materialists and many feminists under the banner of Marx.

Meanwhile, there has been a revival of interest in Aristotle. The neo-Aristotelians agree with Nietzsche's critique of the Enlightenment rational project (that we are never going to be able to derive a sense of human purpose from scientific reason) but assert that sources of value and knowledge are not simply contingent but arise from communities, and that what we need to sort out our moral confusion is a reassertion of Aristotle's emphasis on human beings, not as radically individual beings with an identity prior to their political and social environment, but as political animals, whose purpose and value are deeply and essentially rooted in their community. A leading representative of this position is Alasdair MacIntyre.

Opposing such a communitarian emphasis, a good deal of the modern Liberal tradition points out that such a revival of traditions simply will not work. The breakdown of traditional communities and the widespread perception of the endemic injustice of inherited ways cannot be reversed (appeals to Hobbes here are common). So we need to place our faith in the rational liberal Enlightenment tradition, and look for universal rational principles, human rights, rules of international morality, justice based on an analysis of the social contract, and so on. An important recent example of such a view is Rawls's famous book A Theory of Justice.

Finally, there are those who again agree with Nietzsche's analysis of the Enlightenment and thus reject the optimistic hopes of rational progress, but who deny Nietzsche's proffered solution. To see life as irrational chaos that we must embrace with joyous affirmation as the value-generating activity of our human lives, while recognizing its ultimate meaninglessness, seems to many people like a prescription for insanity. What we, as human beings, must have to live a fulfilled human life is an image of eternal meaning. This we can derive only from religion, which provides for us, as it always has, a transcendent sense of order, something that answers to our essential human nature far more deeply than either the Enlightenment faith in scientific rationality or Nietzsche's call to a life of constant metaphorical self-definition.

To read the modern debates over literary interpretation, legal theory, human rights issues, education curriculums, feminist issues, ethnic rights, communitarian politics, or a host of other similar issues is to come repeatedly across the clash of these different positions (and others). To use the analogy I started with, activities on the playing fields are going on more energetically than ever. Right in the middle of most of these debates, and generously scattered throughout the footnotes and bibliographies, Nietzsche's writings are alive and well. To that extent, his ideas are still something to be reckoned with. He may have started by shouting over the loudspeaker system in a way to which no one bothered attending; now, on many playing fields, the participants and fans are considering and reacting to his analysis of their activities. So Nietzsche today is, probably more than ever before in this century, right in the centre of some vital debates over cultural questions.

You may recall how, in Book X of the Republic, Plato talks about the "ancient war between poetry and philosophy." What this seems to mean, from the argument, is an ongoing antagonism between different uses of language: between language that seeks above all denotative clarity, the language of exact definitions and precise logical relationships, and language whose major quality is its ambiguous emotional richness; between, that is, the language of geometry and the language of poetry (or, simply put, between Euclid and Homer).

Another way of characterizing this dichotomy is to describe it as the tension between a language appropriate to discovering the truth and one appropriate to creating it, between, that is, a language that sets itself up as an exact description of a given order (or of one that might eventually be available) and a language that sets itself up as an ambiguous poetic vision of, or analogy to, a natural or cosmic order.

Plato, in much of what we studied, seems clearly committed to a language of the former sort. Central to his course of studies that will produce guardian rulers is mathematics, which is based upon the most exact denotative language we know. Hence the famous inscription over the door of the Academy: "Let no one enter here who has not studied geometry." Underlying Plato's remarkable suspicion of a great deal of poetry, and particularly of Homer, is this attitude to language: poetic language is suspect because, being based on metaphors (figurative comparisons or word pictures), it is a third remove from the truth. In addition, it speaks too strongly to the emotions and thus may unbalance the often tense equilibrium needed to keep the soul in a healthy state.

One needs to remember, however, that Plato's attitude to language is very ambiguous, because, in spite of his obvious endorsement of the language of philosophy and mathematics, in his own style he is often a poet, a creator of metaphor. In other words, there is a conflict between his strictures on metaphor and his adoption of so many metaphors (that of the dramatic dialogue itself is only the most obvious). Many famous and influential passages from the Republic, for example, are not arguments but poetic images or fictional narratives: the Allegory of the Cave, the image of the Sun, the Myth of Er.

Plato, in fact, has always struck me as someone who was deeply suspicious of poetry and metaphor because he responded to them so strongly. Underlying his sometimes harsh treatment of Homer may be the imagination of someone who is all too responsive to it (conversely, Aristotle's more lenient view of poetry may stem from the fact that he did not really feel its effects so strongly). If we were inclined to adopt Nietzsche's interpretation of philosophy, we might be tempted to see in Plato's treatment of Homer and his stress on the dangers of poetic language his own "confession" of weakness. His work is, in part, an attempt to fight his own strong inclination to prefer metaphoric language.

Nietzsche is unchallenged as one of the most insightful and powerful critics of the moral climate of the 19th century (and of what remains of it in ours). His explorations brought forward unconscious motivation, the conflict of opposing forces within the mind, and the possibilities of creative integration. . . . Freud and Nietzsche elaborate upon the whole field of libidinal economy: the transit of the libido through other selves, aggression, the infliction and reception of pain, and something very much like death, the total evacuation of the entire quantum of excitation with which the organism is charged.

Nietzsche suggests that in our concern for the other, in our sacrifice for the other, we are concerned with ourselves, one part of ourselves represented by the other. That for which we sacrifice ourselves is unconsciously related to as another part of us. In relating to the other we are in fact also relating to a part of ourselves, and we are concerned with our own pleasure and pain and our own expression of the will to power. In one analysis of pity, Nietzsche states that "we are, to be sure, not consciously thinking of ourselves but are doing so strongly unconsciously." He goes on to suggest that it is primarily our own pleasure and pain that we are concerned about, and that the feelings and reactions that follow are multi-determined: "We never do anything of this kind out of one motive."

The real world is flux and change for Nietzsche, but in his later works there is no “unknowable true world.” The split between a surface, apparent world and an unknowable but true world of things-in-themselves was, as is well known, a view Nietzsche rejected. For one thing, as Mary Warnock points out, Nietzsche was attempting to get across the point that there is only one world, not two. She also suggests that for Nietzsche, if we contribute anything to the world, it is the idea of a “thing,” and in Nietzsche’s words, “the psychological origin of the belief in things forbids us to speak of things-in-themselves.”

Nietzsche holds that there is an extra-mental world to which we are related and with which we have some kind of fit. For him, even as knowledge develops in the service of self-preservation and power, to be effective a conception of reality must grasp at least a certain amount of, or aspect of, reality. However much Nietzsche may at times treat artistic creation and dissimulation (out of chaos) as paradigmatic even for science (which will not recognize it as such), in arriving at this position Nietzsche assumes the truth of scientifically based beliefs as the foundation for many of his arguments, including those regarding the origin, development and nature of perception and consciousness, and what these entail for our knowledge and falsification of the external and inner world. In fact, to some extent the form-providing, affirmative, this-worldly healing of art is a response to the terrifying, nausea-inducing truths revealed by science, truths for which science itself has no treatment of the underlying cause of the nausea. Nietzsche also writes of the horrifying existential truths against which science can attempt a [falsifying] defence. Nevertheless, while there is a real world to which we are affiliated, there is no sensible way to speak of a nature or constitution or eternal essence of the world in and of itself, apart from description and perspective. States of affairs to which our interpretations are to fit are established within human perspectives and reflect (though not only) our interests, concerns, and needs for calculability. Within such relations (and perhaps as meta-commentary on the grounds of our knowing) Nietzsche is quite willing to write of the truth, the constitution of reality, and the facts of the case. There is, however, no unrestricted will to power, nor any privileged access to absolute truth. To expect a pure desire for a pure truth is to expect an impossible desire for an illusory ideal.

The inarticulate comes to rule supreme in oblivion, either in the individual’s forgetfulness or in those long stretches of the collective past that have never been and will never be called forth into the necessarily incomplete articulations of history, the record of human existence that is profusely interspersed with dark passages. This accounts for the continuous questing of archaeology, palaeontology, anthropology, and geology, and accounts, too, for Nietzsche’s warning against the “insomnia” of historicism. As for the individual, the same drive is behind the modern fascination with the unconscious and, thus, with dreams, and it was Nietzsche who, before Freud, spoke of forgetting as an activity of the mind. At the beginning of his Genealogy of Morals, he claims, in defiance of all psychological “shallowness,” that the lacunae of memory are not merely “passive” but the outcome of an active and positive “screening,” preventing us from remembering what would upset our equilibrium. Nietzsche is thus the first discoverer of successful “repression,” the burying of potential experience in the unarticulated, which is for him, as it were, enemy territory.

Still, he is notorious for stressing the ‘will to power’ that is the basis of human nature, the ‘resentment’ that arises when it is denied its basis in action, and the corruptions of human nature encouraged by religions, such as Christianity, that feed on such resentment. Yet the powerful human being who escapes all this, the ‘Übermensch’, is not the ‘blond beast’ of later fascism: it is a human being who has mastered passion, risen above the senseless flux, and given creative style to his or her character. Nietzsche’s free spirits recognize themselves by their joyful attitude to eternal return. He frequently presents the creative artist rather than the world-conquering warlord as his best exemplar of the type, but the disquieting fact remains that he seems to leave himself no words with which to condemn any uncaged beast of prey who finds his style by exerting repulsive power over others. Nietzsche’s frequently expressed misogyny does not help this problem, although in such matters the interpretation of his many-layered and ironic writing is not always straightforward. Similarly, such anti-Semitism as is found in his work is balanced by equally intense denunciations of anti-Semitism, and by an equal or greater contempt for the German character of his time.

Nietzsche’s current influence derives not only from his celebration of the will, but more deeply from his scepticism about the notions of truth and fact. In particular, he anticipated many central tenets of postmodernism: an aesthetic attitude toward the world that sees it as a ‘text’, the denial of facts, the denial of essences, the celebration of the plurality of interpretations and of the fragmented self, and the political reading of discourse, all of which were waiting for their rediscovery in the late 20th century. Nietzsche also has the incomparable advantage over his followers of being a wonderful stylist, and his perspectivism is echoed in the shifting array of literary devices (humour, irony, exaggeration, aphorism, verse, dialogue, parody) with which he explores human life and history.

All the same, Nietzsche is openly pessimistic about the possibility of knowledge: ‘We simply lack any organ for knowledge, for ‘truth’: We ‘know’ (or believe or imagine) just as much as may be useful in the interests of the human herd, the species, and perhaps precisely that most calamitous stupidity of which we shall perish some day’ (The Gay Science).

This position is very radical, for Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also refuses the pragmatist identification of truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.

Nietzsche’s view, his ‘perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation and to which interpretations would correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: ‘Facts are precisely what there is not, only interpretations’.

It is often maintained that perspectivism is self-undermining: if the thesis that all views are interpretations is true, then, it is argued, that very thesis is not an interpretation. If, on the other hand, the thesis is itself an interpretation, then there is no reason to believe that it is true, and it again follows that not every view is an interpretation.

Nonetheless, this refutation assumes that if a view, as perspectivism is, is an interpretation, it is by that very fact wrong. This is not so. To call a view an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that perspectivism is actually false, it is necessary to produce another view superior to it on specific epistemological grounds.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes truth to particular views in relation to specific sets of facts. Still, it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus, Nietzsche grants the truth of specific scientific theories; he does, however, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’ (The Gay Science). The facts science addresses and the methods by which it addresses them serve the purposes for which they have been devised; but these have no priority over the many other purposes of human life.

The existence of many purposes and needs relative to which theories are measured is another crucial element of perspectivism. Perspectivism is sometimes thought to imply relativism, according to which no standards for evaluating purposes and theories can be devised. This is correct only in that Nietzsche denies the existence of a single set of standards for determining epistemic value. However, he holds that specific views can be compared with and evaluated in relation to one another. The ability to use criteria acceptable in particular circumstances does not presuppose the existence of criteria applicable in all. Agreement is therefore not always possible, since individuals may sometimes differ over the most fundamental issues dividing them.

Nonetheless, this fact would not trouble Nietzsche; his opponents too have to confront it, only, as he would argue, to suppress it by insisting on the hope that all disagreements are in principle eliminable even if our practice falls woefully short of the ideal. Nietzsche abandons that ideal and considers irresoluble disagreement an essential part of human life.

Nature is the most apparent display of the will to power at work. It is wholly unconscious and acts solely out of necessity, such that no morality is involved. We are a part of this frightening chaos where anything can happen at any time. However, to realize this and rightly accept it totally requires far too much of us. So we invent reasons for things that have no reason. We believe in our own falsification of nature. We produce art, and delight in a perfection that is unnatural. All the same, we continue to dwell within nature, and still we fool ourselves about it. Nietzsche stresses that all such human actions, these yieldings to an acceptable appearance rather than a surrender to nature itself, are accomplished without much difficulty, out of necessity. They are instincts we have developed for our own preservation. He believes the natural state is the best state, even in all its wantonness, and he calls people to open their ears to the purity of a nature without design. ‘The universe's music box repeats eternally its tune, which can never be called a melody’.

The Gay Science explains the problems with humanizing nature. It is a fitting departure point because, through criticism, it states Nietzsche's regard for the unconsciousness of the will to power in nature. He fills this section with warnings: “Let us beware of thinking that the universe is a living being,” he says. “Where should it expand?” “On what should it feed?” The universe lacks its own will to power. We can in no way identify with the universe, despite all our efforts. ‘We should not make it something essential, universal, and eternal’. Nietzsche is dispelling the notion that there is meaning in existence. He is saying that when all is said and done and gone, our universe does not matter. After all, it will destroy and create itself into eternity. It does not have a purpose; it is not a machine. Humans seek honour in the universe, and we find honour despite the absence of any purposive design. Tricking ourselves is easy, since we have become conceited enough to believe that we are the purpose of the universe, that all the power in the universe works toward producing our species of mammals. Yet let us be reasonable. Nietzsche calls the organic an “exception of exceptions.” Within matter it is an exception: we are not the secret aim of the universe, but a byproduct of unusual circumstances. It is an error to assume that all of space behaves in the manner of that which immediately surrounds us. We cannot be sure of this uniformity. Nietzsche uses our surrounding stars as an example, asserting that stars may very well exist whose orbits are not at all what we suppose. ‘Let us beware of attributing to it heartlessness and unreason or their opposites’. There is no intention, nor is there accident, for accident has meaning only against a world of purposes. All these things are disguises man has given the universe. They are false, but why should we beware? Nietzsche emphasizes our weakness as animals. We are the only animals that live against our natural inclinations. By suppressing our instincts, we become less and less equipped to exist as part of nature. If we continue living against our surroundings, we will be removed, not out of God's anger, but out of necessity.

Nietzsche reminds us that the total character of the world is in all eternity chaos. The only structure responsible for the necessity that reigns in nature is the will to power. The will to power begins in chaos. We find it unpleasant to think of our lives in these terms, because, stronger than our urge to deify nature is usually our urge to deify ourselves. We are merely living things. Let us beware of saying that death is opposed to life. The living is merely a type of what is dead, and a very rare type. There is no opposition, only will to power. The living and the dead are both made of the same basic materials. The difference is that when something is alive, its molecules reproduce. Again, Nietzsche focuses on the exceptions.

When will all these shadows of God cease to darken our minds? When will we complete our de-veneration of nature? When may we begin to ‘naturalize’ humanity through a pure, newly discovered, newly redeemed nature?

Nietzsche divides human beings according to their creative power. The higher, creative humans see and hear more than the lower, who concern themselves with the everyday matters of man. This is a pattern found throughout nature: the higher animals experience more. In humans, the higher become at once happier and unhappier, because they feel more. Nietzsche calls these people the ‘poets’ who are creating the lives on stage, while the non-creative are mere ‘actors’. The actors could be better understood as spectators of the poets' performance. The poet thinks and feels in harmony with his time; he is able continually to fashion something that did not previously exist. He created the entire world of valuations, colours, accents, perspectives, scales, affirmations, and negations studied by the actor. In our society, the actor is called practical, when it is the poet who is responsible for any value we place. By this, Nietzsche means that, since nothing has any meaning or value by nature, the poets are responsible, because they are the ones who produce beauty. They are responsible for everything in the world that concerns man. They fail to recognize this, however, and remain unaware of their best power. We are thus neither as proud nor as happy as we might be.

Our poets produce art. Art is the expression of a perfect beauty that does not occur naturally. Human hands have made it and human minds have conceived it; it is human nature. We are separate from the rest of the animal kingdom in our deviation from nature. Our instincts lead us to delight in art, since it distinguishes itself and its creators as supernatural. We must wonder at our becoming bored with nature and creating something better, something, perhaps, more nearly perfect than nature can become. What does nature know of perfection? Perfection is the will to power, but a facet of it exhibited only in humans. All the same, it seems that art is meant to be as far removed from everything natural as possible. Nietzsche uses the Greeks as an example of this pure art. They did not want fear and pity. To prevent these human emotions from interfering in the presentation of a writer's work, the Greeks would confine actors to narrow stages and restrict their facial expressions. The object was beautiful speech, with the presentation meant only to do the words justice, not to distract with dramatic interpretation. A more modern example is the opera. Nietzsche points out the insignificance of the words compared with the music. What is lost in not understanding an opera singer? In the present, art has degenerated so that its purpose is often to remind us of our humanity rather than to express that which is perfect. We listen for words that shackle us to the land in a medium that can elevate us above the rest. Art gives human life reason, purpose, and all the things we once attributed to God, but art is true. It is the only meaning in life, because it is unnatural.

Consider next the question of human autonomy and its limits, for Nietzsche conveys that our deepest predisposition is toward the preservation of the species. It is the oldest and strongest of our instincts, and it is the essence of the herd. Why should we care about the survival of our race? It is not in our interest as individuals. Yet we cannot avoid it. Nietzsche points out that even the most harmful men aid preservation by instilling instincts in the rest of us that are necessary for our survival. In that way they are largely responsible for it, but according to Nietzsche, we are no longer capable of ‘living badly’, that is, living in a way that goes against the preservation of the human race: even if, above all, you perish, you will still have contributed to humanity.

Nietzsche looks back to the seventeenth and eighteenth centuries and compares them according to their attitudes toward nature. The seventeenth century was a time when humans lived closer to their instincts. An artist of that time would attempt to capture all that he could in art, removing himself from it as much as possible. In the eighteenth century, Nietzsche says, artists took the focus away from nature and put it on themselves. Art became social propaganda. It became more human. We are missing the ‘hatred of the lack of a sense of nature’ that was present in the seventeenth century. Nietzsche writes of the nineteenth century with hope. He says people have become more concrete, more fearless. They are putting the 'health of the body ahead of the health of the soul'. It would appear that they are making a return to nature, except that if they reach it, it will be for the first time.

‘There has never yet been a natural humanity’, that is, a humanity living according to its nature. Nietzsche stresses that we have become far too unnatural in our social and technical evolution. Humans have tried to exist above nature, condemning their own world (in fact, themselves) as if they alone lived by the grace of God. This is an obvious contradiction for Nietzsche. He believes, first, that since human beings share the same relationship with nature as any other animal, we ought to live according to our instincts. We ought to do what comes naturally to us, that which most reflects the will to power. He uses two examples of natural human behaviour (human instinct) to clarify. The first example is the search for knowledge, which occurs naturally in all humans, whether they are conscious of it or not. The second is the way we perceive our rights in society. This is an example of humans living according to necessity. We act according to that which can be enforced. Punishment is our only deterrent. In that respect we live naturally, but it is only in resignation that we meet this superficial nature. If we could live in accord with that which governs our fellow creatures, we would discover our true selves and realize our full potential as artists.

Nietzsche regards the evolution of human nature as a journey from the age of morality into the age of consciousness, or the age of ridicule. He saw the humanity of his time still living according to teachers of remorse, their so-called ‘heroes.’ These men inspired our faith in life, and our fear of death. They gave us false reasons for our existence, disguising it with the invention of a ‘teacher of the purpose of existence’. Nietzsche believes that the idea of God came about because it distracted people from their insignificance. What does one person count for in relation to the whole of society? Nothing; the preservation of the species is all that matters. We are mammals like any other. Whether by force or by some orderly enforcement, in order to gain power and leadership for themselves the ‘heroes’ taught us that we were significant in the eyes of God. They taught us to take ourselves seriously, that life is worth living because of this 'teacher of purpose'. We cling to this safety blanket, which protects us from seeing ourselves at eye level with the rest of the natural world, because we cannot handle the true nature of the human race as a herd.

Nietzsche predicts that humans will evolve to the point where they can comprehend the true nature of their species. This realization brings ridicule into our lives. Nothing has any meaning; it is all random functioning at the hands of the will to power. We can no longer be solemn in our work, for how are we to take ourselves seriously when we have given up our blanket? It seems that we have lost our direction amid the madness of nature and reason. We must remember that unreason is also essential for the preservation of the species.

In his writings (the essays on aesthetics, the Untimely Meditations, The Gay Science, and others) Nietzsche wishes to be considered by his readers, and viewed in and by history, as a psychologist, one who practises psychology and who, for his time, must be credited with willfully embracing a ‘new system for psychology’.

In fact, several authors, for instance Kaufmann and Golomb, view many aspects of Nietzsche’s work as psychological ones, a fact disregarded by the many authors who regard Nietzsche as a mere anti-philosopher and a writer of short, beautiful verse. Surely, while being a young, frustrated, physically and mentally ill, retired professor of philology who viciously attacked his colleagues, the state, society and the establishment, and who wrote provocative verses and notes, Nietzsche also sought to bring the nature of man, the unconscious, the conscious, self-analysis, relationships with other individuals, the inner state (emotions, sensations, feelings and the like), and the irrational sources of man's power and greatness, along with his morbidity and self-destructiveness, into the scope of his examination.

Further, in his many writings Nietzsche also talks of the mind, the mental, instincts, reflexes, reflexive movements, the brain, symbolic representations, images, views, metaphors, language, experiences, innate and hereditary psychological elements, defence and protective mechanisms, repression, suppression, overcoming, and an overall battle, struggle and conflict between individuals. As an illustration, Nietzsche describes how blocked instinctual powers turn within the individual into resentment, self-hatred, hostility and aggression. Moreover, Nietzsche strives to analyse the human being, his crisis, his despair and his existence in the world, and to find means to alleviate human crisis and despair.

These aspects of Nietzsche's work elicit a tendency to compare his doctrine with that of Freud and psychoanalysis, and to argue that the Freudian doctrine and school (the psychoanalytic theory of human personality on which the psychotherapeutic technique of psychoanalysis is based) and its methods of treatment were influenced and affected by Nietzsche's philosophy, work and doctrine. As a demonstration from the relevant literature, according to Golomb's (1987) thesis, the theoretical core of psychoanalysis is already part and parcel of Nietzsche's philosophy, insofar as it is based on ideas that are both displayed in it and developed by it - ideas such as the unconscious, repression, sublimation, the id, the superego, primary and secondary processes and the interpretation of dreams.

Nonetheless, the actual situation in the domains of psychotherapy, psychiatry and clinical psychology should not be passed over so quickly. While the two savants (Nietzsche and Freud) both endeavour to understand man and to develop the healthy power still present in the individual and the neurotic patient, so that he may overcome the psychological boundaries that repress his vitality and inhibit his ability to function freely and creatively and to attain truth, the difference between the psychodynamic school and method of treatment (psychoanalysis in particular), which stemmed from the doctrines and views of Freud, and the existential approach to psychotherapy and the existential-humanistic school of psychology, which stemmed from Nietzsche, is profound and significant for actual psychotherapeutic treatment. The reasons for this difference lie in the variation in the two savants' views and definitions of man and human existence, of the nature and character of man and his relationship with the world and the environment, and in the variation in the intellectual soil that nourished and nurtured the two savants' views and doctrines (that is, their philosophical and historical roots and influences) and the manners in which these were devised and designed.

Before anything else, consider the question of Nietzsche's historical critique. We will recall how one recurring feature of the narrative drawn from our texts has been the rapidly developing interest in, and use of, the enormously powerful historical criticism developed by Enlightenment thinkers: a way of undermining the authority of traditional power structures and the fundamental beliefs that sustain them.

We saw, for example, how in the Discourse on Method Descartes offers a hypothetical historical narrative to undermine the authority of the Aristotelians and a faith in an eternal, unchanging natural order. Then we discussed how in the Discourse on Inequality, based on an imaginative reconstruction of the history of human society, Rousseau, following Descartes's lead but extending it to other areas (and much more aggressively), can encourage in the mind of the reader the view that evil in life is the product of social injustice (rather than, say, the result of Original Sin or the lack of virtue in the lower orders). We saw, in addition, in reading Kant, Marx, and Darwin, how a historical understanding applied to particular phenomena undercuts traditional notions of eternal truths enshrined in particular beliefs (whether in species, in religious values, or in final purposes).

Nonetheless, and this is a crucial point, the Enlightenment thinkers, particularly Kant, Rousseau, and Marx, do not allow history to undermine all sources of meaning; for them, beyond its unanswerable power to dissolve traditional authority, history holds out the promise of a new grounding for rational meaning in the growing power of human societies to become rational, in a word, to progress. Thus history, beyond revealing the inadequacies of many traditional power structures and sources of meaning, has also become the best hope and proof for a firm faith in a new eternal order: the faith in progressive reform or revolution. This, too, is clearly something Wollstonecraft pins her hopes on (although, as we saw, how radical her proposals are remains a matter of debate).

On this point, as we also saw, Darwin, at least in the Origin of Species, is ambiguous, almost as if, knowing he is on very slippery ground, he does not want his readers to recognize the full metaphysical and epistemological implications of his theory of the history of life. Because of this probably deliberate ambiguity, we variously interpreted Darwin as offering either a "progressive" view of evolution, something that we could adapt to the Enlightenment's faith in rational progress, or, alternatively, a contingent view of the history of life, a story without progress or final goal or overall purpose.

Well, in Nietzsche (as in this second view of Darwin) there is no such ambiguity. Darwin made his theory public for the first time in a paper delivered to the Linnean Society in 1858. The paper begins, “All nature is at war, one organism with another, or with external nature.” In the Origin of Species, Darwin is more specific about the character of this war: “There must be in every case a struggle for existence, either one individual with another of the same species, or with the individuals of distinct species, or with the physical conditions of life.” All these assumptions are apparent in Darwin’s definition of natural selection: if under changing conditions of life organic beings present individual differences in almost every part of their structure, and this cannot be disputed; if there be, owing to their geometrical rate of increase, a severe struggle for life at some age, season, or year, and this cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of life . . . individuals thus characterized will tend to produce offspring similarly characterized. This principle of preservation, or the survival of the fittest, Darwin called Natural Selection.

Similarly, individual linguistic symbols are handled by clusters of distributed brain areas and are not produced in any single area. The specific sound patterns of words may be produced in dedicated regions. Nevertheless, the symbolic and referential relationships between words are generated through a convergence of neural codes encoded and decoded in different and independent brain regions. The processes of word comprehension and retrieval result from combinations of simpler associative processes in several separate brain regions, which require an active role from other regions. The symbolic meaning of words, like the grammar that is essential for the construction of meaningful relationships between strings of words, is an emergent property of the complex interaction of several brain parts.

If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.

With that, let us consider two aspects of biological reality: more complex order in biological reality may be associated with the emergence of new wholes that are greater than their parts, and the entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts (the view that all organisms (parts) are emergent aspects of the self-organizing process of life (the whole), and that the proper way to understand the parts is to examine their embedded relations to the whole). If this is the case, the emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complex systems, marked by the appearance of a new, profound complementary relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Nonetheless, it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.

Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on complex language systems, one particularly relevant for our purposes, concerns consciousness of self. Consciousness of self as an independent agency or actor is predicated on a fundamental distinction or dichotomy between this self and other selves. The self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm distinct from the material realm. It was, moreover, the assumed separation between these realms that led Descartes to posit his dualism in order to understand the nature of consciousness in the mechanistic classical universe.

Every schoolchild learns eventually that Nietzsche was the author of the shocking slogan, "God is dead." However, what makes that statement possible is another claim, even more shocking in its implications: "Only that which has no history can be defined" (Genealogy of Morals). Since Nietzsche was the heir to seventy-five years of German historical scholarship, he knew that there was no such thing as something that has no history. Darwin had, as Dewey points out in the essay we examined, effectively shown that searching for a true definition of a species is not only futile but unnecessary (since the definition of a species is something temporary, something that changes over time, without any permanent, lasting and stable reality). Nietzsche dedicates his philosophical work to doing the same for all cultural values.

Reflect for a moment on the full implications of this claim. Our study of moral philosophy began with the dialectical exchange that explores the question "What is virtue?" The assumption there was that until we can settle the issue with a definition of what virtue is, a definition that eludes all cultural qualification, we cannot effectively deal with morality except through divine dispensation, unexamined reliance on tradition, skepticism, or relativism (the position of Thrasymachus). The full exploration of what dealing with that question of definition might require takes place in the Republic.

Many texts we read subsequently took up Plato's challenge, seeking to discover, through reason, a permanent basis for understanding knowledge claims and moral values. No matter what the method, as Nietzsche points out in his first section, the belief was always that grounding knowledge and morality in truth was possible and valuable, that the activity of seeking to ground morality was conducive to a fuller good life, individually and communally.

To use a favourite metaphor of Nietzsche's, we can say that previous systems of thought had sought to provide a true transcript of the book of nature. They made claims about the authority of one true text. Nietzsche insists repeatedly that there is no single canonical text; there are only interpretations. So there is no appeal to some definitive version of Truth (whether we search in philosophy, religion, or science). Thus the Socratic quest for some way to tie morality down to the ground, so that it does not fly away, is (and has always been) futile, although the long history of attempts to do so has disciplined the European mind so that we, or a few of us, are ready to move into dangerous new territory where we can put the most basic assumptions about the need for conventional morality to the test and move on "Beyond Good and Evil," that is, to a place where we do not take the universalizing concerns and claims of traditional morality seriously.

Nietzsche begins his critique here by challenging that fundamental assumption: Who says that seeking the truth is better for human beings? How do we know an untruth is not better? What is truth anyway? In doing so, he challenges the sense of purpose basic to the traditional philosophical endeavour. Philosophers, he points out early, may be proud of the way they begin by challenging and doubting received ideas. However, they never challenge or doubt the key notion they all start with, namely, that there is such a thing as the Truth and that it is something valuable for human beings (surely much more valuable than its opposite).

In other words, just as the development of the new science had gradually, and for many painfully and rudely, emptied nature of any certainty about a final purpose, about the possibility of ever agreeing on the ultimate value of scientific knowledge, so Nietzsche is, with the aid of the new historical science (and the proto-science of psychology), emptying all sources of cultural certainty of their traditional purposiveness and claims to permanent truth, and therefore of their value as we have traditionally understood the term. There is thus no antagonism between good and evil, since all versions of each are equally fictive (although some may be more useful for the purposes of living than others).

At this point, I really do not want to analyse the various ways Nietzsche deals with this question. Nevertheless, I do want to insist upon the devastating nature of his historical critique of all previous systems that have claimed to ground knowledge and morality in a clearly defined truth of things. For Nietzsche's genius rests not only on his adopting the historical critique and applying it to new areas but much more on his astonishing perspicuity in seeing just how extensive and flexible the historical method might be.

For example, Nietzsche, like some of those before him, insists that value systems are culturally determined. They arise, he insists, as often as not from, or in reaction to, conventional folk wisdom. Yet to this he adds something that to us, after Freud, may be well accepted, but that in Nietzsche's hands becomes shocking: understanding a system of value, he claims, requires us more than anything else to see it as the product of a particular individual's psychological history, a uniquely personal confession. A moral system's relationship to something called the "Truth" has nothing to do with its "meaning"; instead we seek its coherence in the psychology of the philosopher who produced it.

Gradually it has become clear to me what every great philosophy so far has been: namely, the personal confession of its author and a kind of involuntary and unconscious memoir; and also that the moral (or immoral) intentions in every philosophy constituted the real germ of life from which the whole plant had grown.

The concentration here is on unmasking claims to "truth" by tracing them to the history of the life of the person proposing the particular "truth" this time. Systems offering us a route to the Truth are simply psychologically produced fictions that serve the deep (often unconscious) purposes of the individual proposing them. Therefore they are what Nietzsche calls "foreground" truths. They do not penetrate into the deep reality of nature, and yet to fail to see this is to lack "perspective."

Even more devastating is Nietzsche's extension of the historical critique to language itself. Since philosophical systems deliver themselves to us in language, they are shaped by that language and by the history of that language. Our Western preoccupation with the inner self which perceives, deliberates, wills, and so forth, Nietzsche can dismiss as, in large part, the product of grammar, the result of a language that builds its statements around a subject and a predicate. Without that historical accident, Nietzsche affirms, we would not have committed the error of mistaking for the truth something that is a by-product of our particular, culturally determined language system.

He makes the point, for example, that our faith in consciousness is just an accident. If instead of saying "I think," we were to say "Thinking is going on in my body," then we would not be tempted to give the "I" some independent existence (e.g., in the mind) and make large claims about the ego or the inner self. The reason we search for such an entity stems from the accidental construction of our language, which encourages us to use a subject (the personal pronoun) and a verb. The same false confidence in language also makes it easy for us to think that we know clearly what key things like "thinking" and "willing" are; whereas, if we were to engage in even a little reflection, we would quickly realize that the inner processes neatly summed up by these apparently clear terms are anything but clear. His emphasis on the importance of psychology as queen of the sciences underscores his sense of how we need to understand more fully just how complex these activities are, particularly the emotional appetites, before we talk about them as simplistically as philosophers have most recently done.

This remarkable insight enables Nietzsche, for example, at one blow and with cutting contempt, to dismiss as "trivial" the system Descartes had set up so carefully in the Meditations. Descartes's triviality consists in his failing to recognize how the language in which he is imprisoned as an educated European shapes his philosophical system, and in his facile treatment of what thinking is in the first place. The famous Cartesian dualism is not a central philosophical problem but an accidental by-product of grammar designed to serve Descartes's own particular psychological needs. Similarly, Kant's discovery of "new faculties" Nietzsche derides as just a trick of language, a way of providing what looks like an explanation but is, in fact, as ridiculous as the old notion that medicines put people to sleep because they have the sleeping virtue.

It should be clear from these examples that very little is capable of surviving Nietzsche's onslaught, for what is there to which we can point that does not have a history or that is not delivered to us in a historically developing system of language? After all, our scientific enquiries in all areas of human experience teach us that nothing simply is, for everything is always becoming.

We might be tempted, as many have been, to point to the new natural science as a counter-instance: is not natural science a progressive realization of the truth of the world, or at least a closer and closer approximation to that truth? In fact, it is interesting to think about just how closely Kuhn and Nietzsche might be linked in their views about the relationship between science and the truth of things, or to what extent modern science might provide the most promising refutation of Nietzsche's assertion that there is no privileged access to a final truth of things (a hotly disputed topic in the last decade or more). Suffice it to say here that for Nietzsche science is just another "foreground" way of interpreting nature. It has no privileged access to the Truth, although he does concede that, compared with other beliefs, it has the advantage of being based on sense experience and is therefore more useful for modern times.

There is one important point to stress in this review of the critical power of Nietzsche's project. It is essential to note that Nietzsche is not taking us to task for having beliefs. We have to have beliefs. Human life must be the affirmation of values; otherwise, it is not life. Nonetheless, Nietzsche is centrally concerned to mock us for believing that our belief systems are True, are fixed, are somehow eternally right by some grounded standard of knowledge. Human life, in its highest forms, must be lived in the full acceptance that the values we create for ourselves are fictions. We, or the best of us, have to have the courage to face the fact that there is no "Truth" upon which to ground anything in which we believe; we must, in full view of that harsh insight, still affirm ourselves with joy. The Truth is not accessible to our attempts at discovery; what thinking human beings characteristically do, in their pursuit of the Truth, is create their own truths.

Now, this last point, like the others, has profound implications for how we think of ourselves, for our conception of the human. For human individuals, like human cultures, also have a history. Each of us has a personal history, and thus we, too, cannot be defined; we, too, are in a constant process of becoming, of transcending the person we have been into something new. We may like to think of ourselves as defined by some essential rational quality, but in fact we are not. In stressing this, of course, Nietzsche links himself with certain strains of Romanticism, especially with William Blake and with Emerson and Thoreau.

This tradition of Romanticism holds up a view of life that is radically individualistic, self-created, self-generated. "I must create my own system or become enslaved by another man's," Blake wrote. It is also thoroughly aristocratic, with little room for traditional altruism, charity, or egalitarianism. Our lives, to realize their highest potential, should be lived in solitude from others, except perhaps those few we recognize as kindred souls, and our life's efforts must be a spiritually demanding but joyful affirmation of the process by which we maintain the vital development of our imaginative conceptions of ourselves.

It might be appropriate here to contrast this view of the self as a constantly developing entity, without essential permanence, with Marx's view. Marx, too, insists on the process of transformation of the self and of the self's ideas, but for him, as we discussed, that transformation is controlled by the material forces of production, and these in turn are driven by the logic of history. It is not something that the individual takes charge of by an act of individual will, because individual consciousness, like everything else, emerges from and is dependent upon the particular historical and material circumstances, the stage in the development of production, of the social environment in which the individual finds himself or herself.

Nietzsche, like Marx, and unlike later Existentialists, de Beauvoir, for example, recognizes that the individual inherits particular things from the historical moment of the culture (e.g., the prevailing ideas and, particularly, the language and ruling metaphors). Thus, for Nietzsche the individual is not totally free of all context. However, the appropriate response to this is not, as in Marx, the development of class consciousness, a solidarity with other citizens and an imperative to help history along by committing oneself to the class war alongside other proletarians, but, in the best and brightest spirits, a call for a heightened sense of one's individuality, of one's radical separation from the herd, of one's final responsibility to one's own most fecund creativity.

It is vital to see that Nietzsche and the earlier Romantics are not simply saying we should do whatever we like. They all have a sense that self-creation of the sort they recommend requires immense spiritual and emotional discipline - the discipline of the artist shaping his most important original creation according to the stringent demands of his creative imagination. These demands may not be rational, but they are not permissively relativistic in that 1960's sense ("If it feels good, do it"). Permissiveness may often have been attributed to this Romantic tradition, a sort of 1960's "Boogie til you drop" ethic, but that is not what any of them had in mind. For Nietzsche that would simply be a herd response to a popularized and bastardized version of a much higher call to a solitary life lived with the most intense personal joy, suffering, insight, courage, and imaginative discipline.

This aspect of Nietzsche's thought represents the fullest nineteenth-century European affirmation of a Romantic vision of the self as radically individualistic (at the opposite end of the spectrum from Marx's view of the self as socially and economically determined). It has had a profound and lasting effect in the twentieth century, as we become ever more uncertain about coherent social identities and thus increasingly inclined to look for some personal way to take full charge of our own identities without answering to anyone but ourselves.

Much of the energy and much of the humour in Nietzsche's prose comes from the urgency with which he sees such creative self-affirmation as essential if the human species is not going to continue to degenerate. For Nietzsche, human beings are, primarily, biological creatures with certain instinctual drives. The best forms of humanity are those who most excellently express the most important of these biological drives, the "will to power," by which he means the individual will to assert oneself and create what he or she needs in order to live most fully. Such a "will to power" is beyond morality, because it does not answer to anyone's system of what makes up good and bad conduct. The best and strongest human beings are those who create values for themselves, live by them, and refuse to acknowledge their common links with anyone else, other than the other strong people who do the same and are thus their peers.

His surveys of world history have convinced Nietzsche that the development of systems of morality favouring the weak, the suffering, the sick, the criminal, and the incompetent (all of whom he lumps together in that famous phrase "the herd") has turned this basic human drive against human beings themselves. He salutes the genius of those who could accomplish this feat (especially the Jews and Christians), which he sees as the revenge of the slaves against their natural masters. From these centuries-long acts of revenge, human beings are now filled with feelings of guilt, inadequacy, jealousy, and mediocrity, a condition alleviated, if at all, by dreams of being helpful to others and of an ever-expanding democracy, an agenda powerfully served by modern science (which serves to bring everything and everyone down to the same level). Fortunately, however, this ordeal has trained our minds splendidly, so that the best and brightest (the new philosophers, the free spirits) can move beyond the traditional boundaries of morality, that is, "beyond good and evil" (his favourite metaphor for this condition is the tensely arched bow ready to shoot off an arrow).

It is important to stress, as I mentioned above, that Nietzsche does not believe that becoming such a "philosopher of the future" is easy or for everyone. It is, on the contrary, an extraordinarily demanding call, and those few capable of responding to it may have to live solitary lives without recognition of any sort. He is demanding an intense spiritual and intellectual discipline that will enable the new spirit to move into territory no philosopher has ever roamed before, a displacing medium where there are no comfortable moral resting places and where the individual will probably (almost certainly) have to pursue a profoundly lonely and perhaps dangerous existence (hence the importance of another favourite metaphor of his, the mask). Nevertheless, this is the only way we can counter the increasing degeneration of European man into a practical, democratic, technocratic, altruistic herd animal.



Before placing the analogy on the table, however, I wish to issue a caveat. Analogies can really help to clarify, but they can also mislead us by being unduly persuasive. I hope that the analogy I offer will provide such clarity, but not at the price of oversimplifying. So, as you listen to this analogy, you need to keep asking: To what extent does the analogy not hold? To what extent does it reduce the complexity of what Nietzsche is saying to something simpler?

The analogy I want to put on the table is the comparison of human culture to a huge recreational complex in which several different games are going on. Outside, people are playing soccer on one field, rugby on another, American football on another, Australian football on another, and so on. In the clubhouse different groups of people are playing chess, dominoes, poker, and so on. There are coaches, spectators, trainers, and managers involved in each game. Surrounding the recreational complex is wilderness.

We might use these games to characterize different cultural groups: French Catholics, German Protestants, scientists, Enlightenment rationalists, European socialists, liberal humanitarians, American democrats, free thinkers, or what have you. The variety represents the rich diversity of intellectual, ethnic, political, and other activities.

The situation is not static, of course. Some games have far fewer players and fans, and their popularity is shrinking; some are gaining popularity rapidly and increasingly taking over parts of the available territory. Thus, the traditional Aboriginal sport of lacrosse is but a small remnant of what it was before contact, whereas the democratic capitalist game of baseball is growing exponentially, as is the materialistic science game of archery. The latter two might combine their efforts to create a new game or merge their leagues.

When Nietzsche looks at Europe historically, what he sees is that different games have been going on like this for centuries. He further sees that many participants in any one game have been aggressively convinced that theirs is the "true" game, that it corresponds to the essence of games or is a close match to the wider game they imagine going on in the natural world, in the wilderness beyond the playing fields. So they have spent much time producing their rule books and coaches' manuals and making claims about how the principles of their game copy or reveal or approximate the laws of nature. This has promoted, and still promotes, a good deal of bad feeling and fierce argument. In addition, within the group pursuing any one game there have always been all sorts of sub-games debating the nature of the activity, refining the rules, arguing over the correct version of the rule book or about how to educate the referees and coaches, and so on.

Nietzsche's first goal is to attack the dogmatic claims about the truth of the rules of any particular game. He does this, in part, by appealing to the tradition of historical scholarship that shows that these games are not eternally true but have a history. Rugby began when a soccer player broke the rules, picked up the ball, and ran with it. American football developed out of rugby and has changed and is still changing. Basketball had a precise origin that can be historically documented.

Rule books are written, in languages that have a history, by people with a deep psychological point to prove: the games are an unconscious expression of the particular desires of inventive games people at a very particular historical moment. These rule writers are called Plato, Augustine, Socrates, Kant, Schopenhauer, Descartes, Galileo, and so on. For various reasons they believe, or claim to believe, that the rules they come up with reveal something about the world beyond the playing field and are therefore "true" in a way that other rule books are not; they have, as it were, privileged access to reality and thus record, to use a favourite metaphor of Nietzsche's, the text of the wilderness.

In attacking such claims, Nietzsche points out that the wilderness bears no relationship at all to any human invention like a rule book; nature, he writes, is "wasteful beyond measure, without purposes and consideration, without mercy and justice, fertile and desolate and uncertain at the same time; imagine indifference itself as a power - how could you live according to this indifference? Living - is that not precisely wanting to be other than this nature?" Because there is no connection with what nature truly is, such rule books are mere "foreground" pictures, fictions dreamed up, reinforced, altered, and discarded for contingent historical reasons. Moreover, the rule manuals often bear a suspicious resemblance to the rules of grammar of a culture. Thus, for example, the notion of an ego as a thinking subject, Nietzsche points out, is closely tied to the rules of European languages that insist on a subject-and-verb construction as an essential part of any statement.

So how do we know that what we have is the truth? Why do we want the truth, anyway? People seem to need to believe that their games are true, but why? Might they not be better off if they accepted that their games were false, were fictions that have nothing to do with the reality of nature beyond the recreational complex? If they understood the fact that everything they believe in has a history and that, as he says in the Genealogy of Morals, "only that which has no history can be defined," they would understand that all this proud history of searching for the truth is something quite different from what the philosophers who have written the rule books proclaim.

Furthermore, these historical changes and developments occur accidentally, for contingent reasons, and have nothing to do with the games, or any one game, shaping itself according to some ultimate game or some master rule book given by the wilderness, which is indifferent to what is going on. There is no basis for the belief that, if we look at the history of the development of these games, we will discover some progressive evolution of games toward some higher type. We may be able, like Darwin, to trace historical genealogies, to construct a narrative, but that narrative does not reveal any clear direction or any final goal or any progressive development. The genealogy of games suggests that history is a record of contingent change. The assertion that there is such a thing as progress is simply another game, another rule added by inventive minds (who need to believe in progress); it bears no relationship to nature beyond the sports complex.

While one is playing on a team, one follows the rules and thus has a sense of what constitutes right and wrong, or good and evil, conduct in the game. All those engaged in the same endeavour share this awareness. To pick up the ball in soccer is evil (unless you are the goalie), and to punt the ball while running in American football is permissible but stupid; in Australian football both actions are essential and right. In other words, different cultural communities have different standards of right and wrong conduct. These standards are determined by artificial inventions called rule books, one for each game. The rules in these books have developed historically; thus, they have no permanent status and no claim to privileged access.

Now, at this point you might be thinking about the other occasion on which I introduced a game analogy, namely, in the discussion of Aristotle's Ethics. For Aristotle also acknowledges that different political systems have different rules of conduct. Still, Aristotle believes that an examination of different political communities will enable one to derive certain principles common to them all, bottom-up generalizations that will then provide the basis for reliable rational judgment about which game is being played better, about what counts as good play in any particular game, about whether or not a particular game is being conducted well.

In other words, Aristotle maintains that there is a way of discovering and appealing to some authority outside any particular game to adjudicate moral and knowledge claims that arise in particular games or in conflicts between different games. Plato, of course, also believed in the existence of such a standard, but proposed a different route to discovering it.

Now Nietzsche emphatically denies this possibility. Anyone who tries to do what Aristotle recommends is simply inventing another game (we can call it Super-sport) and is not discovering anything true about the real nature of games, because reality (the wilderness surrounding us) is not organized as a game. In fact, he argues, we have created this recreational complex and all the activities that go on in it to protect ourselves from nature (which is indifferent to what we do with our lives), not to copy some recreational rule book that the wilderness reveals. Human culture exists as an affirmation of our opposition to, or contrast with, nature, not as an extension of rules that encompass both human culture and nature. That is why falsehoods about nature might be a lot more useful than truths, if they enable us to live more fully human lives.

If we think of the wilderness as a text about reality, as the truth about nature, then, Nietzsche claims, we have no access at all to that text. What we do have access to is conflicting interpretations, none of them based on privileged access to a "true" text. Thus, the soccer players may think that they and their game are superior to rugby and the rugby players, because soccer more closely represents the surrounding wilderness, but such statements about better and worse are irrelevant. There is nothing rule-bound outside the games themselves. Therefore, all dogmatic claims about the truth of all games or of any particular game are false.

Now, how did this situation come about? Well, there was a time when all Europeans played almost the same game and had done so for many years. Having little or no historical knowledge, and sharing the same head coach in the Vatican and the same rule book, they believed that their game was the only one possible and had been around for ever. So they naturally believed that their game was true. They shored up that belief with appeals to scripture, or to eternal forms, or to universal principles, or to rationality or science or whatever. There were many quarrels about the nature of ultimate truth, that is, about just how one should tinker with the rule book, about what provided access to God's rules, but there was agreement that such access must exist.

Take, for example, the offside rule in soccer. Without it the game could not continue in its traditional way. Soccer players therefore see the offside rule as an essential part of their reality, and since soccer is the only game in town and we have no idea of its history (which might, for example, tell us about the invention of the offside rule), it is easy to interpret the offside rule as a universal, a requirement for social activity, and we will find and endorse scriptural texts that reinforce that belief. Our scientists will devote their time to linking the offside rule with the mysterious rumblings that come from the forest. From this, one might be led to conclude that the offside rule is a Law of Nature, something that extends far beyond the realm of our particular game into all possible games and, beyond those, into the realm of the wilderness itself.

Of course, there were powerful social and political forces (the coaches and trainers and owners of the teams) who made sure that people had lots of reasons for believing in the unchanging verity of the present arrangements. So it is not surprising that we find plenty of learned books, training manuals, and locker room exhortations urging everyone to remember the offside rule and castigating as "bad" those who routinely forget that part of the game. We will also worship those who died in defence of the offside rule. Naturally any new game that did not recognize the offside rule would be a bad game, an immoral way to conduct oneself. So if some group tried to start a game with a different offside rule, that group would be attacked because they had violated a rule of nature and were thus immoral.

However, for contingent historical reasons, Nietzsche argues, that situation of only one game in town did not last. The recreational unity of the area broke down, and developments in historical scholarship have by now demonstrated all too clearly, with an overwhelming weight of evidence, that all the various attempts to show that one specific game is the one true game, exalted above all the others, are false, dogmatic, trivial, self-deceiving, and so on.

For science has revealed that the notion of a necessary connection between the rules of any game and the wider purposes of the wilderness is simply an ungrounded assertion. There is no way in which we can make the connection between the historically derived fictions in the rule book and the mysterious and ultimately unknowable directions of irrational nature. To carry on with science, we have to believe in causes and effects, but there is no way we can prove that this is a true belief, and there is a danger for us if we simply ignore that fact. Therefore, we cannot prove a link between the game and anything outside it. History has shown us, just as Darwin's natural history has shown, that all apparently eternal things have a story, a line of development, a genealogy. Thus, notions like species have no reality - they are temporary fictions imposed for the sake of defending a particular arrangement.

So, God is dead. There is no eternal truth anymore, no rule book in the sky, no ultimate referee or international Olympic committee chair. Nietzsche did not kill God; history and the new science did. Nietzsche is only the most passionate and irritating messenger, announcing over the PA system to anyone who will listen that no appeal to a system can save someone like Kant or Descartes or Newton: anyone who thinks that what he or she is doing is grounded in the truth of nature has simply been mistaken.

This insight seems obvious to Nietzsche, and he is troubled that no one else seems worried about it or even to have noticed it. So he is moved to call the matter to our attention as stridently as possible, because he thinks that this realization requires a fundamental shift in how we live our lives.

For Nietzsche, Europe is in crisis. It has a growing power to make life comfortable and an enormous energy. However, people seem to want to channel that energy into arguing about what amount to competing fictions and into forcing everyone to follow particular fictions.

Why is this insight so worrying? Well, one point is that dogmatists get aggressive. Soccer players and rugby players who forget what Nietzsche is pointing out can start killing each other over questions that admit of no answer, namely, questions about which group has the true game, which ordering has privileged access to the truth. Nietzsche senses that dogmatism is going to lead to warfare, and he predicts that the twentieth century will see an unparalleled extension of warfare in the name of competing dogmatic truths. Part of his project is to wake up the people who are intelligent enough to respond to what he is talking about, so that they can recognize the stupidity of killing each other for an illusion that they mistake for some "truth."

Besides that, Nietzsche, like Mill (although in a very different way), is seriously concerned about the possibilities for human excellence in a culture where the herd mentality is taking over, where Europe is developing into competing herds - a situation that is either sweeping up the best and the brightest or stifling them entirely. Nietzsche, like Mill and the ancient pre-Socratic Greeks to whom he constantly refers, is an elitist. He wants the potential for individual human excellence to be liberated from the harnesses of conformity, group competition, and conventional morality. Otherwise, human beings are going to become destructive, lazy, conforming herd animals, using technology to divert themselves from the greatest joys in life, which come only from individual striving and creativity, activities that require one to release one's instincts without keeping them eternally subjugated to a controlling historical consciousness or a conventional morality of good and evil.

What makes this particularly a problem for Nietzsche is that he sees a certain form of game gaining popularity: democratic volleyball. In this game, the rule book insists that all players be treated equally, that no natural authority be given to the best players or to those who understand the nature of quality play. Therefore the mass of inferior players is taking over, the quality of the play is deteriorating, and there are fewer and fewer good volleyball players. This process is being encouraged both by the traditional ethic of "help your neighbour," now often in a socialist uniform, and by modern science. As the mass of ever more inferior players takes over the sport, the mindless violence of their desire to attack other players and take over their games increases, as does their hostility to those who are uniquely excellent (who may need a mask to prevent themselves from being recognized).

The hopes for any change in this development are not good. In fact, things might be getting worse. For when Nietzsche looks at all these games going on he notices certain groups of people, and the prospect is not totally reassuring.

First, there remains the overwhelming majority of people: the players and the spectators, those caught up in their particular sport. These people are, for the most part, continuing as before, without reflecting on or caring about what they do. They may be vaguely troubled by rumours they hear that their game is not the best, they may be bored with the endless repetition in the schedule, and they may have essentially reconciled themselves to the fact that theirs is not the only game going on, but they would rather not think about it. Or else, stupidly confident that what they are doing is what really matters about human life, is true, they preoccupy themselves with tinkering with the rules, using the new technology to get better balls, more comfortable seats, louder whistles, more brightly painted sidelines, trendier uniforms, tastier Gatorade - all in the name of progress.

Increasing numbers of people are moving into the stands or participating only through the newspaper or the television set. Most people are thus losing touch with themselves and their potential as instinctual human beings. They are the herd, the last men, preoccupied with the trivial, unreflectingly conformist because they think, to the extent they think at all, that what they do will bring them something called "happiness." Yet they are not happy: they are in a permanent state of narcotized anxiety, seeking new ways to entertain themselves with the steady stream of distractions that the forces of the market produce: technological toys, popular entertainment, college education, Wagner's operas, academic jargon.

This group, of course, includes all the experts in the game, the cheerleaders whose job it is to keep us focussed on the seriousness of the activity, the sports commentators and pundits whose lives are bound up with interpreting, reporting, and classifying players and contests. These sportscasters are, in effect, the academics and government experts, the John Maddens and Larry Kings and Mike Wallaces of society, those demigods of the herd, whose authority derives from the false notion that what they are dealing with is something other than a social fiction.

There is a second group of people, who have accepted the ultimate meaninglessness of the games in which they once played. They have moved to the sidelines, not as spectators or fans, but as critics, as cynics or nihilists, dismissing out of hand all the pretensions of the players and fans, but not affirming anything themselves. These are the souls who, having nothing to will (because they have seen through the fiction of the game and therefore have no motive to play any more), prefer to will nothing in a state of paralysed skepticism. Nietzsche has a certain admiration for these people, but maintains that a life like this, the life of the nihilist on the sidelines, is not a human life.

For, Nietzsche insists, to live as a human being is to play a game. Only in playing a game can one affirm one's identity, can one create values, can one truly exist. Games are the expression of our instinctual human energies, our living drives, what Nietzsche calls our "will to power." So the nihilistic stance, though understandable and, in a sense, courageous, is sterile. For we are born to play, and if we do not, then we are not fulfilling a worthy human function. Yet we also have to recognize that all games are equally fictions, invented human constructions without any connection to the reality of things.

So we arrive at the position that we need to affirm a belief (invent a rule book) which we know to have been invented, to be divorced from the truth of things. To play the best game is to live by rules that we invent for ourselves as an assertion of our instinctual drives and to accept that the rules are fictions: they matter, we accept them as binding, we judge ourselves and others by them, and yet we know they are artificial. Just as in real life a normal soccer player derives a sense of meaning during the game, affirms his or her value in the game, without ever once believing that the rules of soccer organize the universe or have any universal validity, so we must commit ourselves to epistemological and moral rules that enable us to live our lives as players, while simultaneously recognizing that these rules have no universal validity.
