June 23, 2010


Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge, construed as an even broader category. They have proposed various examples of propositional knowledge that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often accommodated by analyses of propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats propositional knowledge as merely the ability to provide a correct answer to a possible question. However, White may be equating ‘producing’ knowledge, in the sense of producing the correct answer to a possible question, with ‘displaying’ knowledge, in the sense of manifesting knowledge. The latter can be done even by very young children and some non-human animals, independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that ‘h’ without believing or accepting that ‘h’ can be modified so as to illustrate this point. The example concerns an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horse races. If the example is modified so that the hypothetical ‘seer’ never picks winners but only muses over whether those horses might win, or only reports those horses winning, this behaviour would be as much of a candidate for the person’s manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.


These considerations expose limitations in Edward Craig’s analysis (1990) of the concept of knowing in terms of a person’s being a satisfactory informant in relation to an inquirer who wants to find out whether or not ‘h’. Craig realizes that counterexamples to his analysis appear to be constituted by knowers who are too recalcitrant to inform the inquirer, too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried ‘wolf’). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition concerning one’s having the power to proceed in a way that represents a state of affairs, the represented state of affairs being causally involved in one’s proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.

Knowledge and belief: according to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurately if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950; Ayer, 1956) or conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). None the less, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato (429-347 BC) in view of his claim that knowledge is infallible while belief or opinion is fallible (‘Republic’ 476-9). But this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say ‘I do not believe she is guilty. I know she is’ and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying ‘I do not just believe she is guilty, I know she is’, where ‘just’ makes it especially clear that the speaker is signalling that she has something more salient than mere belief, namely knowledge, not that she has something inconsistent with belief. Compare: ‘You did not hurt him, you killed him.’

H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that states of belief never involve complete confidence. Conscious beliefs clearly involve some level of confidence, and to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it may also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do’, where what I can do may include answering questions. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct.’ But Woozley explains this tension by appealing to a distinction between conditions under which we are justified in making a claim (such as a claim to know something) and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, it would nonetheless be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years earlier, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066; indeed, he would deny being sure (or having any right to be sure) that 1066 was the correct date. Radford would none the less insist that Jean knows when the Battle occurred, since he clearly remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought, at least, to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading’.

Those who agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain’s (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Armstrong will grant Radford the point that Jean knows that the Battle of Hastings took place in 1066; in fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and subsequently ‘guessed’ that it took place in 1066, we would surely describe the situation as one in which Jean’s false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford’s original case as one in which Jean’s true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong’s response to Radford was to reject Radford’s claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorensen, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, DC. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha’s belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford’s examinee is in a comparable position. Even if Jean lacks the belief that Radford denies him, Radford does not have an example of knowledge that is unattended with belief. Suppose that Jean’s memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his response is mere guesswork, and so he has every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.

Perception presents itself as a fundamental philosophical topic, both for its central place in any theory of knowledge and its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) it gives us knowledge of the world around us; (2) we are conscious of that world by being aware of ‘sensible qualities’: colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment; (3) such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound; (4) there ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like ‘sense-data’ or ‘percepts’ exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include ‘scepticism’ and ‘idealism’.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.

Further, perceptual knowledge is knowledge acquired by or through the senses, and it includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something - that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up - by some sensory means. Seeing that the light has turned green is learning something - that the light has turned green - by use of the eyes. Feeling that the melon is overripe is coming to know a fact - that the melon is overripe - by one’s sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas, see, by the newspapers, that our team has lost again, see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees, and hence comes to know, something about the gauge (what it says), one cannot be described as coming to know, by perceptual means, that one needs gas. If one cannot hear that the bell is ringing, one cannot - at least in this way - hear that one’s visitors have arrived. In such cases one sees (hears, smells, etc.) that ‘a’ is ‘F’, coming to know thereby that ‘a’ is ‘F’, by seeing (hearing, etc.) that some other condition, ‘b’s’ being ‘G’, obtains. When this occurs, the knowledge (that ‘a’ is ‘F’) is derived from, or dependent on, the more basic perceptual knowledge that ‘b’ is ‘G’.

Finally, the Representational Theory of Mind (RTM) (which goes back at least to Aristotle) takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and images. Such states are said to have ‘intentionality’ - they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an image of George W. Bush with dreadlocks is inaccurate.)

The Representational Theory of Mind defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry. RTM also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition ‘q’ from the propositions ‘p’ and ‘if p then q’ is (among other things) to have a sequence of thoughts of the form ‘p’, ‘if p then q’, ‘q’.
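To make this picture concrete, here is a minimal sketch in Python - purely illustrative, not any theorist's official model, with all class and function names my own invention - of attitudes as relations to content-bearing representations, and of inference as a sequence of such states:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Representation:
    """A mental representation, individuated by its propositional content."""
    content: str

@dataclass(frozen=True)
class MentalState:
    """An intentional state: an attitude relation to a representation."""
    attitude: str                    # e.g., 'belief', 'desire', 'regret', 'thought'
    representation: Representation

# The same representation can figure in different attitudes.
elvis_dead = Representation("Elvis is dead")
belief = MentalState("belief", elvis_dead)
regret = MentalState("regret", elvis_dead)

def modus_ponens(p: Representation, conditional: Representation) -> list:
    """Model inferring q from p and 'if p then q' as a sequence of thoughts."""
    antecedent, consequent = conditional.content.split(" then ")
    assert antecedent == "if " + p.content, "conditional must match the premise"
    return [
        MentalState("thought", p),
        MentalState("thought", conditional),
        MentalState("thought", Representation(consequent)),
    ]

for state in modus_ponens(Representation("it rains"),
                          Representation("if it rains then the street is wet")):
    print(state.attitude, "that:", state.representation.content)
```

Nothing in the sketch settles the philosophical disputes below; it merely displays the structure RTM posits: one representation, many attitudes, and reasoning as an ordered sequence of attitude-representation pairs.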

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized - i.e., that all mental facts have explanations in terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as ‘folk psychology’) are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)

Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the ‘intentional stance’ toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational - i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a ‘moderate’ realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic ‘structural’ or ‘syntactic’ properties. The semantic properties of a mental state, however, are determined by its extrinsic properties - e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal (‘what-it's-like’) features (‘qualia’), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states such as seeing that something is blue are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)

Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that non-conceptual representations - percepts (‘impressions’), images (‘ideas’) and the like - are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only non-conceptual representations construed in this way.

Contemporary disagreement over non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all, while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.

There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2003), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991), Pitt (2004), Searle (1992), Siewert (1998) and Strawson (1994) claim that purely symbolic (conscious) representational states themselves have a (perhaps proprietary) phenomenology. If this claim is correct, the question of what role phenomenology plays in the determination of content arises again for conceptual representation, and the eliminativist ambitions of Sellars, Brandom and Rey would meet a new obstacle. (It would also raise prima facie problems for reductive representationalism.)

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term ‘representationalism’ is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal - though not in the same way.)

The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45-51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to ‘see through it’ to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of ‘symbol-filled arrays.’ (See the account of mental images in Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences - qualia themselves - that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual ‘scenario’ (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is ‘correct’ (a semantic property) if in the corresponding ‘scene’ (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the ‘phenomenal concept’ - a conceptual/phenomenal hybrid consisting of a phenomenological ‘sample’ (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, ‘you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.’ One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about P, without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Jackson 1982, 1986 and Nagel 1974.)

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties - i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation ‘pictorial’; though of course there may be imagery in other modalities - auditory, olfactory, etc. - as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.
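As a toy illustration of this construal (my own example, not drawn from the cited literature; the quantities and names are invented), contrast a representational property that varies continuously with one that a representation simply has or lacks:

```python
# Analog: a continuously variable property of the representation (brightness)
# carries a continuously variable represented quantity (distance).
def analog_distance(brightness: float) -> float:
    """Brightness in [0, 1] maps onto represented distance in metres."""
    return 100.0 * (1.0 - brightness)

# Digital: a property the representation either has or lacks - a thought is
# about Elvis or it is not, with no intermediate degrees of aboutness.
def digital_about_elvis(thought_content: str) -> bool:
    return "Elvis" in thought_content

print(analog_distance(0.25))                 # 75.0; every value in between is possible
print(digital_about_elvis("Elvis is dead"))  # True; there is no value in between
```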

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is ‘quasi-pictorial’ when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially - for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)

Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are ‘(labelled) interpreted symbol-filled arrays.’ The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each ‘cell’ in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).
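The following sketch shows the general shape of such a hybrid representation, under my own simplifying assumptions (the array contents, labels and helper names are invented for illustration, not taken from Tye):

```python
from typing import List, Optional

# Each cell holds a symbol (or None for 'no surface represented here'). The
# symbols represent discursively; the cell positions represent pictorially,
# each standing for a viewer-centred 2-D location on the imagined object.
Image = List[List[Optional[str]]]

imagined_strawberry: Image = [
    [None,         "green,stem", None        ],
    ["red,curved", "red,flat",   "red,curved"],
    [None,         "red,curved", None        ],
]

def read_off(image: Image) -> None:
    """Recover both kinds of content: what the symbols say (discursive)
    and where they sit in the array (pictorial)."""
    for row, cells in enumerate(image):
        for col, symbol in enumerate(cells):
            if symbol is not None:
                print(f"at location ({row}, {col}): {symbol}")

read_off(imagined_strawberry)
```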

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.

Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990, 1994) and Teleological Theories (Fodor 1990, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone are internalists (or individualists; cf. Putnam 1975, Fodor 1981).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both ‘narrow’ content (determined by intrinsic factors) and ‘wide’ or ‘broad’ content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982) and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role (or its phenomenology).
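Here is a toy rendering - entirely my own, with invented names - of that last characterization, narrow content as a function from context to wide content, using the Twin-Earth cases mentioned below:

```python
# Narrow content modelled as a function from context to wide content. Earth
# and Twin Earth duplicates share this function (an intrinsic matter), while
# the wide content it yields differs with the context supplied.
def water_narrow_content(context: str) -> str:
    watery_stuff = {"Earth": "H2O", "Twin Earth": "XYZ"}
    return watery_stuff[context]

print(water_narrow_content("Earth"))       # wide content on Earth: H2O
print(water_narrow_content("Twin Earth"))  # wide content on Twin Earth: XYZ
```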

Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases they were introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind develops the representational theory of mind by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some - so-called ‘subpersonal’ or ‘sub-doxastic’ representations - are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific representational theory of mind.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the ‘mental models’ of Johnson-Laird 1983, the ‘retinal arrays,’ ‘primal sketches’ and ‘2½-D sketches’ of Marr 1982, the ‘frames’ of Minsky 1974, the ‘sub-symbolic’ structures of Smolensky 1989, the ‘quasi-pictures’ of Kosslyn 1980, and the ‘interpreted symbol-filled arrays’ of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).

A fundamental disagreement among proponents of the computational theory of mind concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.

The classicists (e.g., Turing 1950, Fodor 1975, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors (‘nodes’) and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism - ‘localist’ versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program (Smolensky 1988, 1991, Chalmers 1993).

Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
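A small sketch may help fix ideas here; it is illustrative only (the primitive symbols and the bracketed notation for complexes are my own devices, not Fodor's). A finite stock of primitives plus a recursive formation rule yields a potential infinity of complex representations, whose content is computed compositionally:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Primitive:
    symbol: str              # drawn from a finite stock, e.g., 'JOHN', 'LOVES'

@dataclass(frozen=True)
class Combine:
    parts: tuple             # recursive formation rule: combine representations

Rep = Union[Primitive, Combine]

def content(r: Rep) -> str:
    """Compositional semantics: the content of a complex is determined by
    the contents of its constituents and their structural configuration."""
    if isinstance(r, Primitive):
        return r.symbol
    return "(" + " ".join(content(p) for p in r.parts) + ")"

john, loves, mary = Primitive("JOHN"), Primitive("LOVES"), Primitive("MARY")
thought = Combine((john, loves, mary))
nested = Combine((Primitive("BELIEVES"), Primitive("PETER"), thought))

print(content(thought))                       # (JOHN LOVES MARY)
print(content(nested))                        # productivity: (BELIEVES PETER (JOHN LOVES MARY))
print(content(Combine((mary, loves, john))))  # systematicity: (MARY LOVES JOHN)
```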

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of evolving distributions of ‘weight’ (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is ‘trained up’ by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
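A single-layer perceptron - my own minimal simplification, not any author's model - illustrates the ‘training up’ idea: weights on connections evolve through repeated exposure to labelled examples, with no explicit hypotheses about what is being learned (the feature values and labels below are invented toy data):

```python
import random

def train(examples, epochs=50, lr=0.1):
    """Learn to distinguish two classes by adjusting connection weights."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):                       # repeated exposure
        random.shuffle(examples)
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            output = 1 if activation > 0 else 0
            error = label - output                # adjust weights, not hypotheses
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy data: distinguish 'bright, large' things (label 1) from others (label 0).
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
weights, bias = train(data)
print(weights, bias)
```

The learned ‘knowledge’ resides in nothing more than the final weight distribution, which is the point the connectionist presses against the classical, hypothesis-testing picture of learning.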

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively ‘brittle’ or ‘fragile.’

Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well.)

Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that the computational theory of mind provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. The computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.

Søren Aabye Kierkegaard (1813-1855) was a Danish religious philosopher whose concern with individual existence, choice, and commitment profoundly influenced modern theology and philosophy, especially existentialism.

Søren Kierkegaard wrote of the paradoxes of Christianity and the faith required to reconcile them. In his book Fear and Trembling, Kierkegaard discusses Genesis 22, in which God commands Abraham to kill his only son, Isaac. Although God made an unreasonable and immoral demand, Abraham obeyed without trying to understand or justify it. Kierkegaard regards this ‘leap of faith’ as the essence of Christianity.

Kierkegaard was born in Copenhagen on May 15, 1813. His father was a wealthy merchant and strict Lutheran, whose gloomy, guilt-ridden piety and vivid imagination strongly influenced Kierkegaard. Kierkegaard studied theology and philosophy at the University of Copenhagen, where he encountered Hegelian philosophy and reacted strongly against it. While at the university, he ceased to practice Lutheranism and for a time led an extravagant social life, becoming a familiar figure in the theatrical and café society of Copenhagen. After his father's death in 1838, however, he decided to resume his theological studies. In 1840 he became engaged to the 17-year-old Regine Olsen, but almost immediately he began to suspect that marriage was incompatible with his own brooding, complicated nature and his growing sense of a philosophical vocation. He abruptly broke off the engagement in 1841, but the episode took on great significance for him, and he repeatedly alluded to it in his books. At the same time, he realized that he did not want to become a Lutheran pastor. An inheritance from his father allowed him to devote himself entirely to writing, and in the remaining 14 years of his life he produced more than 20 books.

Kierkegaard's work is deliberately unsystematic and consists of essays, aphorisms, parables, fictional letters and diaries, and other literary forms. Many of his works were originally published under pseudonyms. He applied the term existential to his philosophy because he regarded philosophy as the expression of an intensely examined individual life, not as the construction of a monolithic system in the manner of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, whose work he attacked in Concluding Unscientific Postscript (1846; trans. 1941). Hegel claimed to have achieved a complete rational understanding of human life and history; Kierkegaard, on the other hand, stressed the ambiguity and paradoxical nature of the human situation. The fundamental problems of life, he contended, defy rational, objective explanation; the highest truth is subjective.

Kierkegaard maintained that systematic philosophy not only imposes a false perspective on human existence but that it also, by explaining life in terms of logical necessity, becomes a means of avoiding choice and responsibility. Individuals, he believed, create their own natures through their choices, which must be made in the absence of universal, objective standards. The validity of a choice can only be determined subjectively.

In his first major work, Either/Or (2 volumes, 1843; trans. 1944), Kierkegaard described two spheres, or stages of existence, that the individual may choose: the aesthetic and the ethical. The aesthetic way of life is a refined hedonism, consisting of a search for pleasure and a cultivation of mood. The aesthetic individual constantly seeks variety and novelty in an effort to stave off boredom but eventually must confront boredom and despair. The ethical way of life involves an intense, passionate commitment to duty, to unconditional social and religious obligations. In his later works, such as Stages on Life's Way (1845; trans. 1940), Kierkegaard discerned in this submission to duty a loss of individual responsibility, and he proposed a third stage, the religious, in which one submits to the will of God but in doing so finds authentic freedom. In Fear and Trembling (1843; trans. 1941) Kierkegaard focused on God's command that Abraham sacrifice his son Isaac (Genesis 22: 1-19), an act that violates Abraham's ethical convictions. Abraham proves his faith by resolutely setting out to obey God's command, even though he cannot understand it. This ‘suspension of the ethical,’ as Kierkegaard called it, allows Abraham to achieve an authentic commitment to God. To avoid ultimate despair, the individual must make a similar ‘leap of faith’ into a religious life, which is inherently paradoxical, mysterious, and full of risk. One is called to it by the feeling of dread (The Concept of Dread, 1844; trans. 1944), which is ultimately a fear of nothingness.

Toward the end of his life Kierkegaard was involved in bitter controversies, especially with the established Danish Lutheran church, which he regarded as worldly and corrupt. His later works, such as The Sickness Unto Death (1849; trans. 1941), reflect an increasingly somber view of Christianity, emphasizing suffering as the essence of authentic faith. He also intensified his attack on modern European society, which he denounced in The Present Age (1846; trans. 1940) for its lack of passion and for its quantitative values. The stress of his prolific writing and of the controversies in which he engaged gradually undermined his health; in October 1855 he fainted in the street, and he died in Copenhagen on November 11, 1855.

Kierkegaard's influence was at first confined to Scandinavia and to German-speaking Europe, where his work had a strong impact on Protestant theology and on such writers as the 20th-century Austrian novelist Franz Kafka. As existentialism emerged as a general European movement after World War I, Kierkegaard's work was widely translated, and he was recognized as one of the seminal figures of modern culture.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality, and had virtually nothing to say about the physical substrates of human consciousness, the business of examining the dynamic functioning and structural foundations of mind became the province of social scientists and humanists. Adolphe Quételet proposed a ‘social physics’ that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Sanders Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each remained obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.

The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the ‘death of God’ theologian Friedrich Nietzsche (1844-1900). After declaring that God and ‘divine will’ did not exist, Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. That claim, he argued, disguises the fact that all alleged truths are arbitrarily created in the subjective reality of the individual and express or manifest the individual's ‘will’.

In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no really necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he deduced that we are all locked in ‘a prison house of language’. The prison, as he conceived it, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on ‘will’.

Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, is not only confined to natural phenomena but also favors the reductionistic examination of phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche’s emotionally charged defense of intellectual freedom and radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism: Jacques Lacan, Roland Barthes, Michel Foucault and the deconstructionist Jacques Derrida. This apparent direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of that cultural ambience and the ways in which the underlying conflict might be resolved.

The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach's critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, ‘relativistic’ notions.

Jean-Paul Sartre (1905-1980) was a French philosopher, dramatist, novelist, and political journalist, and a leading exponent of existentialism. Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of his work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that ‘man is condemned to be free,’ Sartre reminds us of the responsibility that accompanies human decisions.

Sartre was born in Paris, June 21, 1905, and educated at the École Normale Supérieure in Paris, the University of Fribourg in Switzerland, and the French Institute in Berlin. He taught philosophy at various lycées from 1929 until the outbreak of World War II, when he was called into military service. In 1940-41 he was imprisoned by the Germans; after his release, he taught in Neuilly, France, and later in Paris, and was active in the French Resistance. The German authorities, unaware of his underground activities, permitted the production of his antiauthoritarian play The Flies (1943; trans. 1946) and the publication of his major philosophic work Being and Nothingness (1943; trans. 1953). Sartre gave up teaching in 1945 and founded the political and literary magazine Les Temps Modernes, of which he became editor in chief. Sartre was active after 1947 as an independent Socialist, critical of both the USSR and the United States in the so-called cold war years. Later, he supported Soviet positions but still frequently criticized Soviet policies. Most of his writing of the 1950s deals with literary and political problems. Sartre rejected the 1964 Nobel Prize in literature, explaining that to accept such an award would compromise his integrity as a writer.

Sartre's philosophic works combine the phenomenology of the German philosopher Edmund Husserl, the metaphysics of the German philosophers Georg Wilhelm Friedrich Hegel and Martin Heidegger, and the social theory of Karl Marx into a single view called existentialism. This view, which relates philosophical theory to life, literature, psychology, and political action, stimulated so much popular interest that existentialism became a worldwide movement.

In his early philosophic work, Being and Nothingness, Sartre conceived humans as beings who create their own world by rebelling against authority and by accepting personal responsibility for their actions, unaided by society, traditional morality, or religious faith. Distinguishing between human existence and the nonhuman world, he maintained that human existence is characterized by nothingness, that is, by the capacity to negate and rebel. His theory of existential psychoanalysis asserted the inescapable responsibility of all individuals for their own decisions and made the recognition of one's absolute freedom of choice the necessary condition for authentic human existence. His plays and novels express the belief that freedom and acceptance of personal responsibility are the main values in life and that individuals must rely on their creative powers rather than on social or religious authority.

In his later philosophic work Critique of Dialectical Reason (1960; trans. 1976), Sartre's emphasis shifted from existentialist freedom and subjectivity to Marxist social determinism. Sartre argued that the influence of modern society over the individual is so great as to produce serialization, by which he meant loss of self. Individual power and freedom can only be regained through group revolutionary action. Despite this exhortation to revolutionary political activity, Sartre himself did not join the Communist Party, thus retaining the freedom to criticize the Soviet invasions of Hungary in 1956 and Czechoslovakia in 1968. He died in Paris, April 15, 1980.

Pragmatics is the part of the theory of signs, or semiotics, that concerns the relationship between speakers and their signs: the study of the principles governing appropriate conversational moves is called general pragmatics, while applied pragmatics treats special kinds of linguistic interaction, such as interviews and speech-making. Pragmatism, by contrast, is the philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of the British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept ‘brittle,’ for example, is given by the observed consequences or properties that objects called ‘brittle’ exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of earlier positivism that personal experience is the basis of true knowledge.
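Read dispositionally, Peirce's ‘brittle’ example admits a toy formalization (the notation and the choice of test consequence are mine; a subjunctive conditional would be more faithful than the material conditional used here):

    \mathrm{Brittle}(x) \;\leftrightarrow\; \big(\mathrm{Struck}(x) \rightarrow \mathrm{Shatters}(x)\big)

To call an object brittle, on this reading, is just to say what would be observed were the test performed.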

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called ‘the will to believe’ and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by the American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty's interpretation of the tradition.

In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes, and that appears vague as a result of trying to harmonize opposites, may also be unsatisfactory to some.

Semantics is one of the branches into which semiotics is usually divided: the study of the meaning of words and their relation of designation to the objects studied; a semantics is provided for a formal language when an interpretation or model is specified. The term derives from the Greek semantikos, ‘significant.’ Semantics, then, is the study of the meaning of linguistic signs - that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as ‘What is the meaning of (the word) X?’ They do this by studying what signs are, as well as how signs possess significance - that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs - what they stand for - with the process of assigning those meanings.

Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, plus an approach known as general semantics. Philosophers look at the behavior that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.

These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as ‘the victor at Jena’ and ‘the loser at Waterloo,’ both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.

In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a ‘science of significations’ that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism.

One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analyzing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).

An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to indicate a definite pen - plume - of the color red - rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.

An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition - a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign ‘the moon is a sphere’ is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it refers to - the moon - is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
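Such a rule of meaning can be displayed as a biconditional in the style of Tarski's T-schema (the schema and the notation are my gloss; the text itself names no source):

    \mathrm{True}(\text{‘the moon is a sphere’}) \;\leftrightarrow\; \text{the moon is a sphere}

The left-hand side mentions the sign in the metalanguage; the right-hand side uses the object language to state the truth condition.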

The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs - by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favor of his ‘ordinary language’ philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.

From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in ‘the moon is a sphere’); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs - that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.

What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic - that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone - independent of a speaker and hearer.

Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analyzing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs), often nominal arguments (noun phrases), or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression ‘Bill gives Mary the book,’ ‘gives’ is an operator that relates the arguments ‘Bill,’ ‘Mary,’ and ‘the book.’
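A minimal sketch of this operator-argument analysis, in code (the representation is an illustrative assumption of mine, not a standard formalism):

    # 'Bill gives Mary the book': 'gives' is an operator relating three
    # nominal arguments. The Proposition type is purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class Proposition:
        operator: str     # the relating sign, typically a verb
        arguments: tuple  # the nominal arguments it combines with

    p = Proposition(operator="give", arguments=("Bill", "Mary", "the book"))
    print(p.operator, p.arguments)  # give ('Bill', 'Mary', 'the book')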

Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, making distinctions among entities, relations, or actions. For example, ‘kiss’ belongs to an expression class with other items such as ‘hit’ and ‘see,’ as well as to the conventional part of speech ‘verb,’ in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In ‘Mary kissed John,’ the syntactic role of ‘kiss’ is to relate two nominal arguments (‘Mary’ and ‘John’), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, ‘John’ has the same semantic role - to identify a person - in the following two sentences: ‘John is easy to please’ and ‘John is eager to please.’ The syntactic role of ‘John’ in the two sentences, however, is different: In the first, ‘John’ is the receiver of an action; in the second, ‘John’ is the actor.

Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs - usually single words as vocabulary items called lexemes - in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain ‘seat’ in English, the lexemes ‘chair,’ ‘sofa,’ ‘loveseat,’ and ‘bench’ can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning ‘something on which to sit.’
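A minimal sketch of such a feature analysis for the ‘seat’ domain (the particular features and their values are illustrative assumptions of mine, not an established inventory):

    # Each lexeme is a bundle of semantic features (components).
    seat_domain = {
        "chair":    {"to_sit_on": True, "seats_several": False, "back_support": True},
        "sofa":     {"to_sit_on": True, "seats_several": True,  "back_support": True},
        "loveseat": {"to_sit_on": True, "seats_several": True,  "back_support": True},
        "bench":    {"to_sit_on": True, "seats_several": True,  "back_support": False},
    }

    # The shared component of the domain is what every lexeme has in common.
    shared = set.intersection(*(set(f.items()) for f in seat_domain.values()))
    print(shared)  # {('to_sit_on', True)} - 'something on which to sit'

The distinctive features (how many are seated, whether there is back support) then differentiate the lexemes within the domain.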

Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.

Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as ‘Colorless green ideas sleep furiously’), although grammatical expressions, are meaningless - semantically blocked. The rules must also account for how a sentence such as ‘They passed the port at midnight’ can have at least two interpretations.

Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence ‘Colorless green ideas sleep furiously’ has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as ‘They passed the port at midnight’), one decides which meaning applies.
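A toy illustration of the disambiguation task (the sense inventory is my assumption, merely to make the point concrete):

    # 'They passed the port at midnight' is grammatical but ambiguous;
    # a semantic component must say how a hearer selects among readings.
    port_senses = ["harbor", "fortified wine"]
    readings = [f"They passed the {sense} at midnight" for sense in port_senses]
    for r in readings:
        print(r)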

In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech - that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.

Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.

Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure); or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible - in a syntactically based theory - for surface structure and deep structure jointly to determine the semantic interpretation of an expression.

The focus of general semantics is how people evaluate words and how that evaluation influences their behavior. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigor, and the approach has declined in popularity.

Positivism is a system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge. The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte (1798-1857), but some of the positivist concepts may be traced to the British philosopher David Hume, the French philosopher Henri de Saint-Simon, and the German philosopher Immanuel Kant.

Comte chose the word positivism on the ground that it indicated the ‘reality’ and ‘constructive tendency’ that he claimed for the theoretical aspect of the doctrine. He was, in the main, interested in a reorganization of social life for the good of humanity through scientific knowledge, and thus control of natural forces. The two primary components of positivism, the philosophy and the polity (or program of individual and social conduct), were later welded by Comte into a whole under the conception of a religion, in which humanity was the object of worship. A number of Comte's disciples refused, however, to accept this religious development of his philosophy, because it seemed to contradict the original positivist philosophy. Many of Comte's doctrines were later adapted and developed by the British social philosophers John Stuart Mill and Herbert Spencer and by the Austrian philosopher and physicist Ernst Mach.

By comparison, the moral philosopher and epistemologist Bernard Bolzano (1781-1848) argues that there is something else, an infinity that does not have this whatever-you-need-it-to-be elasticity. In fact a truly infinite quantity (for example, the length of a straight line unbounded in either direction, meaning: the magnitude of the spatial entity containing all the points determined solely by their abstractly conceivable relation to two fixed points) does not by any means need to be variable, and in the adduced example it is in fact not variable. Conversely, it is quite possible for a quantity merely capable of being taken greater than we have already taken it, and of becoming larger than any pre-assigned (finite) quantity, nevertheless to remain at all times merely finite, which holds in particular of every numerical quantity 1, 2, 3, 4, 5.

In other words, for Bolzano there could be a true infinity that was not a variable something that was only bigger than anything you might specify. Such a true infinity was the result of joining two points together and extending that line in both directions without stopping. And what is more, he could separate off the demands of calculus, using a finite quantity without ever bothering with the slippery potential infinity. Here was both a deeper understanding of the nature of infinity and the basis on which his safe, infinity-free calculus was built.

This use of the inexhaustible follows on directly from Bolzano's criticism of the way that ∞ was used as a variable something that would be bigger than anything you could specify, but never quite reached the true, absolute infinity. In Paradoxes of the Infinite Bolzano points out that it is possible for a quantity merely capable of becoming larger than any pre-assigned (finite) quantity nevertheless to remain at all times merely finite.
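The contrast Bolzano is drawing can be put compactly (the notation is mine, not his):

    \text{Potential infinity:}\quad \forall n\, \exists m\, (m > n) \quad \text{- every quantity can be exceeded, yet each } m \text{ is itself finite}
    \text{Actual infinity:}\quad \mathbb{N} = \{1, 2, 3, \dots\} \text{ taken as a single completed whole, or Bolzano's line unbounded in both directions}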

Bolzano intended this as a criticism of the way infinity was treated, but Professor Jacquette sees it instead as a way of making use of practical applications like calculus without the need for weasel words about infinity.

By replacing ∞ with ¤ we do away with one of the most common requirements for infinity, but is there anything left that maps onto the real world? Can we confine infinity to that pure mathematical other world, where anything, however unreal, can be constructed, and forget about it elsewhere? Surprisingly, this seems to have been the view, at least at one point in time, even of the German mathematician and founder of set theory Georg Cantor (1845-1918) himself, who commented in 1883 that only the finite numbers are real.

Both the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30) and the Italian mathematician G. Peano (1858-1932) sought to distinguish the logical paradoxes from those that depend upon the notion of reference or truth (the semantic paradoxes). Among the logically fundamental principles stand the postulates justifying mathematical induction, which ensure that a numerical series is closed, in the sense that nothing but zero and its successors can be numbers, so that any series satisfying such a set of axioms can be conceived as the sequence of natural numbers. Candidates from set theory include the Zermelo numbers, where the empty set is zero and the successor of each number is its unit set, and the von Neumann numbers, where each number is the set of all smaller numbers. A similar and equally fundamental complementarity exists in the relation between zero and infinity. Although the fullness of infinity is logically antithetical to the emptiness of zero, infinity can be obtained from zero with a simple mathematical operation: the division of any nonzero number by zero yields, informally, infinity, while the multiplication of any number by zero yields zero.
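The two set-theoretic constructions just mentioned differ only in their successor operation; in standard notation:

    \text{Zermelo:}\quad 0 = \varnothing, \quad n+1 = \{n\}, \quad \text{so } 2 = \{\{\varnothing\}\}
    \text{von Neumann:}\quad 0 = \varnothing, \quad n+1 = n \cup \{n\}, \quad \text{so } 2 = \{\varnothing, \{\varnothing\}\}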

Set theory was developed by the German mathematician and logician Georg Cantor. From 1878 to 1897, Cantor created a theory of abstract sets of entities that eventually became a mathematical discipline. A set, as he defined it, is a collection of definite and distinct objects in thought or perception conceived as a whole.

Cantor attempted to prove that the process of counting and the definition of integers could be placed on a solid mathematical foundation. His method was to repeatedly place the elements in one set into one-to-one correspondence with those in another. In the case of integers, Cantor showed that each integer (1, 2, 3, . . . n) could be paired with an even integer (2, 4, 6, . . . 2n), and, therefore, that the set of all integers was equal in size to the set of all even numbers.
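The pairing in question is the bijection (standard notation, supplied by me):

    f \colon \mathbb{N} \to 2\mathbb{N}, \qquad f(n) = 2n, \qquad 1 \mapsto 2,\; 2 \mapsto 4,\; 3 \mapsto 6,\; \dots

Since every integer is matched with exactly one even number and none is left over, the two sets have the same cardinality, even though one is a proper subset of the other.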

Amazingly, Cantor discovered that some infinite sets were larger than others and that infinite sets formed a hierarchy of ever greater infinities. After this, the attempt to save the classical view of the logical foundations and internal consistency of mathematical systems failed, and it soon became obvious that a major crack had appeared in the seemingly solid foundations of number and mathematics. Meanwhile, an impressive number of mathematicians began to see that everything from functional analysis to the theory of real numbers depended on the problematic character of number itself.

In the theory of probability, Ramsey was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a redundancy theory of truth, which he combined with radical views of the function of many kinds of propositions. Neither generalizations nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy.

A Ramsey sentence is generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated, the sentence gives the topic-neutral structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Nonetheless, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of the theory, then, by the Löwenheim-Skolem theorem, the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
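Schematically (a toy illustration in my notation, not Ramsey's own symbolism): if the theory's claims involving the term are conjoined into a single open formula, the Ramsey sentence existentially quantifies the term away:

    \text{Theory:}\quad T(\text{quark}) \qquad\qquad \text{Ramsey sentence:}\quad \exists X\, T(X)

The theory's structure is preserved, but the question of what, if anything, ‘quark’ denotes is left to whatever best satisfies the description.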

The most celebrated of the paradoxes in the foundations of set theory was discovered by Russell in 1901. Some classes have themselves as members: the class of all abstract objects, for example, is an abstract object; others do not: the class of donkeys is not itself a donkey. Now consider the class of all classes that are not members of themselves. Is this class a member of itself? If it is, then it is not; and if it is not, then it is.

The paradox is structurally similar to easier examples, such as the paradox of the barber. Imagine a village with a barber who shaves all and only the people who do not shave themselves. Who shaves the barber? If he shaves himself, then he does not; but if he does not shave himself, then he does. The paradox is actually just a proof that there is no such barber, or in other words, that the condition is inconsistent. All the same, it is not so easy to say why there is no such class as the one Russell defines. It seems that there must be some restriction on the kinds of definition that are allowed to define classes, and the difficulty is that of finding a well-motivated principle behind any such restriction.
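In symbols (the standard formulation, in my notation):

    R = \{x : x \notin x\} \quad\Longrightarrow\quad R \in R \leftrightarrow R \notin R
    \forall x\, \big(\mathrm{Shaves}(b, x) \leftrightarrow \neg\,\mathrm{Shaves}(x, x)\big) \quad\Longrightarrow\quad \mathrm{Shaves}(b, b) \leftrightarrow \neg\,\mathrm{Shaves}(b, b)

Each biconditional is a contradiction, which is why no such class and no such barber can exist.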

The French mathematician and philosopher Henri Jules Poincaré (1854-1912) believed that paradoxes like those of Russell and the barber were due to impredicative definitions, and he therefore proposed banning them. But it turns out that classical mathematics requires such definitions at too many points for the ban to be easily accepted. The proposal put forward by Poincaré and Russell was that, in order to solve the logical and semantic paradoxes, one would have to ban any collection (set) containing members that can only be defined by means of the collection taken as a whole. The effect is to mark off definitions that involve a vicious regress from those that involve no such failure. There is frequently room for dispute about whether regresses are benign or vicious, since the issue will hinge on whether it is necessary to reapply the procedure. The cosmological argument, for instance, is an attempt to find a stopping point for what is otherwise seen as an infinite regress, just as the ban on impredicative definitions is an attempt to block a vicious regress in definition.

The philosophy of science is the investigation of questions that arise from reflection upon the sciences and scientific inquiry. Such questions include: what is distinctive about the methods of science? Is there a clear demarcation between science and other disciplines, and how do we place such enquiries as history, economics or sociology? Are scientific theories probable, or more in the nature of provisional conjectures? Can they be verified or falsified? What distinguishes good from bad explanations? Might there be one unified science, embracing all the special sciences? For much of the 20th century these questions were pursued in a highly abstract and logical framework, it being supposed that a general logic of scientific discovery or justification might be found. However, many now take an interest in a more historical, contextual and sometimes sociological approach, in which the methods and successes of a science at a particular time are regarded less in terms of universal logical principles and procedures, and more in terms of the then-available methods and paradigms, as well as the social context.

In addition to general questions of methodology, there are specific problems within particular sciences, giving rise to the philosophies of such subjects as biology, mathematics and physics.

Intuition is the immediate awareness, either of the truth of some proposition or of an object of apprehension, such as a concept. The notion occupies a central place in philosophical accounts of the sources of our knowledge: in Kant, for instance, intuition covers the sensible apprehension of things, and pure intuition is that which structures sensation into the experience of things ordered in space and time.

Natural law is the view of the status of law and morality especially associated with St Thomas Aquinas and the subsequent scholastic tradition. More widely, it is any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writers, is arguably derivative from a Platonic view of ethics, and is implicit in ancient Stoicism. Law stands above and apart from the activities of human lawmakers; it constitutes an objective set of principles that can be seen to be true by natural light or reason and (in religious versions of the theory) that express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God's will: the Dutch philosopher Hugo Grotius (1583-1645), for instance, takes the view that the content of natural law is independent of any will, including that of God, while the German theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view, thereby facing one horn of the Euthyphro dilemma - the dilemma that arises from whatever the source of authority is supposed to be: do we care about the general good because it is good, or do we just call good the things that we care about? The theory may also take a strong form, in which it is claimed that various facts entail values, or a weaker form, which confines itself to holding that reason by itself is capable of discerning moral requirements that are supposed to be binding on all human beings regardless of their desires.

Although the morality of a people and its ethics amount to much the same thing, there is a usage that restricts morality to systems such as that of the German philosopher Immanuel Kant (1724-1804), based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning based on the notion of a virtue, and generally avoiding the separation of moral considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests. Some theorists see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable deductively. Other approaches to ethics (e.g., eudaimonism, situation ethics, virtue ethics) eschew general principles as much as possible, frequently disguising the great complexity of practical reasoning. For Kant the moral law is a binding requirement of the categorical imperative, and the question is whether such approaches are equivalent at some deep level. Kant's own applications of the notion are not always convincing. One cause of confusion in relating Kant's ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of sentiment, but must derive from something unconditional or necessary, such as the voice of reason.

Duty is that which one must do, or that which can be required of one. The term carries implications of that which is owed (due) to other people, or perhaps to oneself. Universal duties would be owed to persons (or sentient beings) as such, whereas special duties are owed in virtue of specific relations, such as being the child of someone or having made someone a promise. Duty or obligation is the primary concept of deontological approaches to ethics, but is constructed in other systems out of other notions. In the system of Kant, a perfect duty is one that must be performed whatever the circumstances; imperfect duties may have to give way to the more stringent ones. On another way of drawing the distinction, perfect duties are those that are correlative with rights in others, while imperfect duties are not. Problems with the concept include the way in which what is due needs to be specified (a frequent criticism of Kant is that his notion of duty is too abstract). The concept may also suggest a regimented view of ethical life, in which we are all forced conscripts in a kind of moral army, and may encourage an individualistic and antagonistic view of social relations.

The most generally accepted account of the externalism/internalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective, and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond any such relations. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

The externalist/internalist distinction has been mainly applied to theories of epistemic justification: It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought contents.

The internalist requirement of cognitive accessibility can be interpreted in at least two ways. A strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focusing his attention appropriately, without the need for any change of position, new information, etc. Though the phrase 'cognitively accessible' suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would require the strong interpretation.

Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a belief must cohere in order to be justified and the coherence relations themselves are reflectively accessible.

It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all of the justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of reliabilism, whose central requirement for justification is roughly that the belief be produced in a way or via a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
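
Put schematically - this is our gloss on the generic reliabilist condition, not a formulation offered by any particular reliabilist - the view says:

\[
S \text{ is justified in believing } p \iff S\text{'s belief that } p \text{ was produced by a cognitive process } \tau \text{ such that } \Pr(\text{true belief} \mid \tau) \geq r,
\]

for some suitably high threshold \(r\); nothing in the condition requires that \(S\) have cognitive access to the fact that \(\tau\) is reliable.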

The main objection to externalism rests on the intuitive conviction that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions by citing cases of beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very nonstandard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

Perhaps the most striking reply to this sort of counter-example, on behalf of reliabilism, is the suggestion that the reliability of a cognitive process is to be assessed in 'normal' possible worlds, i.e., in possible worlds that are the way our world is commonsensically believed to be, rather than in the world which contains the belief being judged. Since the cognitive processes employed in the Cartesian demon cases are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious question is whether there is an adequate rationale for this construal of reliabilism, so that the reply is not merely ad hoc.

The correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Applying the point once again to reliabilism, the claim is that a person who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to bite the bullet and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples, while stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains doubtful whether there are not further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to rejecting.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, the fact that beliefs for which such a factor is available are objectively likely to be true need not be in any way grasped by or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be accessible at all. The internalist will respond that this hybrid view is of no help at all in meeting the objection that the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.
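
The structure of the hybrid position can be set out as a two-premise argument (a schematic reconstruction of the view just described):

(P1) Belief B has feature F. (This premise must be cognitively accessible to the believer.)
(P2) Beliefs having feature F are objectively likely to be true. (This premise need not be accessible.)
Therefore, B is likely to be true.

The internalist's complaint, again, is that a subject who grasps only (P1) has, from his own perspective, no reason to accept the conclusion.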

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed above, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. Views that combine internal and external elements are standardly classified as externalist.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as direct reference theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts from the inside, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be representational (have content) and what it is for something to have some particular content rather than some other. There appear to be only four types of theory that have been proposed: theories that ground representation in (1) similarity, (2) covariance, (3) functional role, and (4) teleology.

Similarity theories hold that 'r' represents 'x' in virtue of being similar to 'x'. This has seemed hopeless to most as a theory of mental representation, because it appears to require that things in the brain share properties with the things they represent: to represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalistic and does not involve property-sharing can be worked out, but it is not obvious how.

Finally, while the formalism of quantum physics predicts that correlations between particles over space-like separation are possible, it can say nothing about what this strange new relationship between parts (quanta) and whole (cosmos) might cause to result outside this formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be "mutually adaptive and complementary to one another."

Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts constituting the whole, even though the whole is exemplified only in its parts. This principle of order, Harris continued, "is nothing really in and of itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality."

In a genuine whole, the relationship between the constituent parts must be "internal or immanent" in the parts, as opposed to a spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that would allegedly constitute the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics; it is also consistent with the manner in which we have begun to understand the relations between parts and whole in modern biology.

Modern physics also reveals, claimed Harris, a complementary relationship between the differences between the parts that constitute a whole and the universal ordering principle that is immanent in each part. While the whole cannot be finally disclosed in the analysis of the parts, the study of the differences between parts provides insights into the dynamic structure of the whole present in each part. The part can never, however, be finally isolated from the web of relationships that disclose the interconnections with the whole, and any attempt to do so results in ambiguity.

Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet order in complementary relationships between difference and sameness in any physical event is never external to that event - the connections are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic constitution of wholeness is not surprising. The relationship between part, as quantum event apparent in observation or measurement, and the undissectable whole, disclosed in but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of the system, we can assume that the cosmos is a single significant whole that evinces progressive order in complementary relations to its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground of all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is not unreasonable to conclude, in philosophical terms at least, that the universe is conscious.

Nevertheless, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representation or description. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.

While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on this knowledge, let us be quite clear on one point - there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss the speculation on this basis are obviously free to do so. However, another conclusion to be drawn here is firmly grounded in scientific theory and experiment: there is no basis in the scientific description of nature for believing in the radical Cartesian division between mind and world sanctioned by classical physics. Clearly, this radical separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realms of their applicability.

Nevertheless, it is worth considering in philosophical terms how our proposed new understanding of the relationship between parts and wholes in physical reality might affect the manner in which we deal with some major real-world problems. This will serve to demonstrate why a timely resolution of these problems is critically dependent on a renewed dialogue between members of the cultures of humanists-social scientists and scientists-engineers. We will also argue that the resolution of these problems could be dependent on a renewed dialogue between science and religion.

As many scholars have demonstrated, the classical paradigm in physics has greatly influenced and conditioned our understanding and management of human systems in economic and political realities. Virtually all models of these realities treat human systems as if they consist of atomized units or parts that interact with one another in terms of laws or forces external to or between the parts. These systems are also viewed as hermetic or closed and, thus, as discrete, separate, and distinct.

Consider, for example, how the classical paradigm influenced our thinking about economic reality. In the eighteenth and nineteenth centuries, the founders of classical economics - figures like Adam Smith, David Ricardo, and Thomas Malthus - conceived of the economy as a closed system in which interactions between parts (consumers, producers, distributors, etc.) are controlled by forces external to the parts (supply and demand). The central legitimating principle of free market economics, formulated by Adam Smith, is that lawful or law-like forces external to the individual units function as an invisible hand. This invisible hand, said Smith, frees the units to pursue their best interests, moves the economy forward, and in general legislates the behaviour of parts in the best interests of the whole. (The resemblance between the invisible hand and Newton's universal law of gravity, and between the relations of parts and wholes in classical economics and classical physics, should be transparent.)

After roughly 1830, economists shifted the focus to the properties of the invisible hand in the interactions between parts, using mathematical models. Within these models, the behaviour of parts in the economy is assumed to be analogous to the lawful interactions between parts in classical mechanics. It is, therefore, not surprising that differential calculus was employed to represent economic change in a virtual world in terms of small or marginal shifts in consumption or production. The assumption was that the mathematical description of marginal shifts in the complex web of exchanges between parts (atomized units and quantities) and whole (closed economy) could reveal the lawful, or law-like, machinations of the closed economic system.
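
The analogy can be made concrete with a schematic illustration (ours, not drawn from the period literature): if \(C(q)\) is total cost and \(R(q)\) total revenue, treated as smooth functions of output \(q\), then the 'marginal' quantities are simply derivatives,

\[
MC(q) = \frac{dC}{dq}, \qquad MR(q) = \frac{dR}{dq},
\]

and profit \(\pi(q) = R(q) - C(q)\) is maximized (assuming an interior maximum) where \(d\pi/dq = 0\), i.e., where marginal revenue equals marginal cost - an equilibrium condition formally akin to the balance of forces in mechanics.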

These models later became one of the foundations of microeconomics. Microeconomics seeks to describe interactions between parts in exact quantifiable measures - such as marginal cost, marginal revenue, marginal utility, and growth of total revenue as indexed against individual units of output. In analogy with classical mechanics, these quantities are viewed as initial conditions that can serve to explain subsequent interactions between parts in the closed system in something like deterministic terms. The combination of classical macro-analysis with micro-analysis resulted in what Thorstein Veblen in 1900 termed neoclassical economics - the model for understanding economic reality that is most widely used today.

Beginning in the 1930s, the challenge became to subsume the understanding of the interactions between parts in closed economic systems within more sophisticated mathematical models using devices like linear programming, game theory, and new statistical techniques. In spite of the growing mathematical sophistication, these models are based on the same assumptions from classical physics featured in previous neoclassical economic theory - with one addition. They also appeal to the assumption that systems exist in equilibrium or in perturbations from equilibria, and they seek to describe the state of the closed economic system in these terms.

One could argue that the fact that our economic models rest on assumptions from classical mechanics is not a problem, by appealing to the two-domain distinction between micro-level and macro-level processes discussed earlier. Since classical mechanics serves us well in our dealings with macro-level phenomena in situations where the speed of light is so large and the quantum of action is so small as to be safely ignored for practical purposes, economic theories based on assumptions from classical mechanics should serve us well in dealing with the macro-level behaviour of economic systems.

The obvious problem is that nature is reluctant to operate in accordance with these assumptions. In the biosphere, the interaction between parts is intimately related to the whole, no collection of parts is isolated from the whole, and the ability of the whole to regulate the relative abundance of atmospheric gases suggests that the whole of the biota displays emergent properties that are more than the sum of its parts. What the current ecological crisis reveals is the abstractness of the virtual world of neoclassical economic theory. The real economy consists of all human activities associated with the production, distribution, and exchange of tangible goods and commodities and the consumption and use of natural resources, such as arable land and water. Although expanding economic systems in the real economy are obviously embedded in a web of relationships with the entire biosphere, our measures of healthy economic systems disguise this fact very nicely. Consider, for example, the account of a healthy economic system written in 1996 by Frederick Hu, head of the competitiveness research team for the World Economic Forum: short of military conquest, economic growth is the only viable means for a country to sustain increases in national living standards . . . An economy is internationally competitive if it performs strongly in three general areas: abundant productive inputs from capital, labour, infrastructure and technology; optimal economic policies such as low taxes, little interference, free trade; and sound market institutions, such as the rule of law and protection of property rights.

The prescription for medium-term growth of economies in countries like Russia, Brazil, and China may seem utterly pragmatic and quite sound. But the virtual economy described is a closed and hermetically sealed system in which the invisible hand of economic forces allegedly results in a healthy growth economy if impediments to its operation are removed or minimized. It is, of course, often true that such prescriptions can have the desired results in terms of increases in living standards, and Russia, Brazil and China are seeking to implement them in various ways.

In the real economy, however, these systems are clearly not closed or hermetically sealed: Russia uses carbon-based fuels in production facilities that produce large amounts of carbon dioxide and other gases that contribute to global warming; Brazil is in the process of destroying a rain forest that is critical to species diversity and to the maintenance of a relative abundance of atmospheric gases that regulate Earth's temperature; and China is seeking to build a first-world economy based on highly polluting old-world industrial plants that burn soft coal. Not to be forgotten, the virtual economic system that the world now seems to regard as the best example of the benefits that can be derived from the workings of the invisible hand, that of the United States, operates in the real economy as one of the primary contributors to the ecological crisis.

In "Consilience," Edward O. Wilson makes the case that effective and timely solutions to the problems threatening human survival are critically dependent on something like a global revolution in ethical thought and behaviour. But his view of the basis for this revolution is quite different from our own. Wilson claimed that since the foundations for moral reasoning evolved in what he termed 'gene-culture' evolution, the rules of ethical behaviour are emergent aspects of our genetic inheritance. Based on the assumption that the behaviour of contemporary hunter-gatherers resembles that of our hunter-gatherer forebears in the Palaeolithic Era, he drew on accounts of Bushman hunter-gatherers living in the central Kalahari in an effort to demonstrate that ethical behaviour is associated with instincts like bonding, cooperation, and altruism.

Wilson argued that these instincts evolved in our hunter-gatherer ancestors via genetic mutation, whereby the ethical behaviour associated with these genetically based instincts provided a survival advantage. He then claimed that since the genes in question were passed on to subsequent generations and eventually became pervasive in the human genome, the ethical dimension of human nature has a genetic foundation. When we fully understand the "innate epigenetic rules of moral reasoning," the rules will probably turn out to be an ensemble of many algorithms whose interlocking activities guide the mind across a landscape of nuanced moods and choices.

Any reasonable attempt to lay a firm foundation beneath the quagmire of human ethics in all of its myriad and often contradictory formulations is admirable, and Wilson's attempt is more admirable than most. In our view, however, there is little or no prospect that it will prove successful, for a number of reasons. While it seems probable that we will discover some linkage between genes and behaviour, the range of human ethical behaviour, and the survival advantages of this behaviour, are far too complex, and too inconsistent, to be reduced to any given set of "epigenetic rules of moral reasoning."

Also, moral codes may derive in part from instincts that confer a survival advantage, but when we examine these codes, it also seems clear that they are primarily cultural products. This explains why ethical systems are constructed in a bewildering variety of ways in different cultural contexts and why they often sanction or legitimate quite different thoughts and behaviours. Let us not forget that rules of ethical behaviour are quite malleable and have been used to legitimate human activities such as slavery, colonial conquest, genocide and terrorism. As Cardinal Newman cryptically put it, "Oh how we hate one another for the love of God."

According to Wilson, the "human mind evolved to believe in the gods" and people "need a sacred narrative." Both, in his view, are merely human constructs, and, therefore, there is no basis for dialogue between the world views of science and religion. "Science, for its part, will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments." The result of the competition between the two world views, Wilson believes, will be the secularization of the human epic and of religion itself.

Wilson obviously has a right to his opinions, and many will agree with him for their own good reasons, but what is most interesting about his thoughtful attempt to posit a more universal basis for human ethics is that it is based on classical assumptions about the character of both physical and biological realities. While Wilson does not argue that human behaviour is genetically determined in the strict sense, he does allege that there is a causal linkage between genes and behaviour that largely conditions this behaviour, and he appears to be a firm believer in the classical assumption that reductionism can uncover the lawful essences that govern the physical aspects of reality, including those associated with the alleged "epigenetic rules of moral reasoning."

Once again, in Wilson's view there is apparently nothing that cannot be reduced to scientific understanding or fully disclosed in scientific terms, and his hope for the future of humanity is that the triumph of scientific thought and method will allow us to achieve the Enlightenment ideal of disclosing the lawful regularities that govern or regulate all aspects of human experience. Hence science will uncover the "bedrock of moral and religious sentiment," and the entire human epic will be mapped in the secular space of scientific formalism. The intent here is not to denigrate Wilson's attempt to posit a more universal basis for the human condition, but rather to demonstrate that any attempt to understand or improve upon human behaviour based on appeals to outmoded classical assumptions is unrealistic. If the human mind did, in fact, evolve in something like deterministic fashion in gene-culture evolution - and if there were, in fact, innate mechanisms in mind that are both lawful and benevolent - Wilson's program for uncovering these mechanisms could have merit. But for all the reasons that have been given, classical determinism cannot explain the human condition, and Darwinian evolution should be modified to accommodate the complementary relationship between cultural and biological principles in the evolution of human self-realization and undivided wholeness.

Freud's use of the word "superman" or "overman" in and of itself might indicate only a superficial familiarity with a popular term associated with Nietzsche. However, as Holmes has pointed out, Freud is discussing the holy, or saintly, and its relation to repression and the giving up of freedom of instinctual expression - central concerns of the third essay of On the Genealogy of Morals, 'What is the Meaning of Ascetic Ideals?'

Nietzsche writes of the anti-nature of the ascetic ideal, how it relates to a disgust with oneself, its continuing destructive effect upon the health of Europeans, and how it relates to the realm of 'subterranean revenge' and ressentiment. In addition, Nietzsche writes of the repression of instincts (though not specifically of impulses toward sexual perversions) and of their being turned inward against the self. He wrote of the 'instinct for freedom forcibly made latent . . . this instinct for freedom pushed back and repressed'. Zarathustra, too, speaks of finding 'illusion and caprice even in the most sacred', that freedom from his love may become his prey. While the formulation as it pertains to sexual perversions and incest certainly does not derive from Nietzsche (although, along different lines, incest was an important factor in Nietzsche's understanding of Oedipus), the formulation relating to freedom was very possibly influenced by Nietzsche, particularly in light of Freud's references to the 'holy' as well as to the 'overman', as these issues are explored in the Antichrist, which had been published just two years earlier.

Nietzsche had written of sublimation, and he specifically wrote of the sublimation of sexual drives in the Genealogy. Freud's use of the term here differs somewhat from his later and more Nietzschean usage, such as in Three Essays on the Theory of Sexuality; but as Kaufmann notes, while 'the word is older than either Freud or Nietzsche . . . it was Nietzsche who first gave it the specific connotation it has today'. Kaufmann regards the concept of sublimation as one of the most important concepts in Nietzsche's entire philosophy.

Of course, it is difficult to determine whether Freud had recently been reading Nietzsche or was consciously or unconsciously drawing on information he had come across some years earlier. It is also possible that Freud had, recently or some time earlier, read limited portions of the Genealogy or other works. Later in his life Freud claimed he could not read more than a few passages of Nietzsche, due to being overwhelmed by the wealth of ideas. This claim might be supported by the fact that Freud demonstrates only a limited understanding of certain of Nietzsche's concepts. For example, his references to the 'overman' show a lack of understanding that the overman entails self-overcoming and sublimation, not simply freely gratified primitive instincts. Later in life, Freud demonstrates a similar misunderstanding in his equation of the overman with the tyrannical father of the primal horde. Perhaps Freud confused the overman with the 'master' whose morality is contrasted with 'slave' morality in the Genealogy and Beyond Good and Evil. The conquering master more freely gratifies instinct and affirms himself, his world and his values as good. The conquered slave, unable to express himself freely, creates a negating, resentful, vengeful morality glorifying his own crippled, alienated condition, and he creates a division not between good (noble) and bad (contemptible), but between good (undangerous) and evil (wicked and powerful - dangerous).

Much of what Rycroft writes is similar to, implicit in, or at least compatible with what we have seen of Nietzsche's theoretical positions, as well as other material that has been placed on the table for consideration. Rycroft specifically states that he takes up 'a position much nearer Groddeck's [on the nature of the "it", or id] than Freud's'. He doesn't mention that Freud was aware of Groddeck's concept of the "it" and understood the term to be derived from Nietzsche. For Nietzsche, there is nothing beyond 'the process itself': it is only as a consequence of grammatical habit that we suppose that the activity, 'thinking', requires an agent.

The self, as manifested in the construction of dreams, may be an aspect of our psychic life that knows things that our waking "I" or ego may not know and may not wish to know, and a relationship may be developed between these aspects of our psychic lives in which the latter opens itself creatively to the communications of the former. Zarathustra states: 'Behind your thoughts and feelings, my brother, there stands a mighty ruler, an unknown sage - whose name is self. In your body he dwells; he is your body'. Nonetheless, Nietzsche's self cannot be understood as a replacement for an all-knowing God to whom the "I" or ego appeals for its wisdom, commandments, guidance and the like. To open oneself to another aspect of oneself that is wiser (an unknown sage), in the sense that new information can be derived from it, does not necessarily entail that this 'wiser' component of one's psychic life has God-like knowledge and commandments which, if one's "I" deciphers and opens to them correctly, will set one on the straight path. It is true, though, that when Nietzsche writes of the self as 'a mighty ruler, an unknown sage', he opens himself to such an interpretation, and even to the possibility that this 'ruler' is unreachable and unapproachable for the "I." But Zarathustra, redeeming the body in 'On the Despisers of the Body', makes it clear that there are aspects of our psychic selves that interpret the body, that mediate its directives, ideally in ways that do not deny the body but aid the body in doing 'what it would do above all else, to create beyond itself'.

Also, the idea of a fully formed, even if unconscious, 'mighty ruler' and 'unknown sage' as a true self beneath an only apparent surface is at odds with Nietzsche's idea that there is no one true, stable, enduring self in and of itself, to be found once the veil of appearance is removed. And even early in his career Nietzsche wrote sarcastically of 'that cleverly discovered well of inspiration, the unconscious'. There is, though, a tension in Nietzsche between the notion of bodily-based drives pressing for discharge (which can, among other things, be sublimated) and a more organized bodily-based self which may be 'an unknown sage' and in relation to which the "I" may open itself to potential communications in the manner Rycroft describes. There is no such conception of the self in Freud, for whom the dream is not produced with the intention of being understood.

Nietzsche explored the ideas of psychic energy and drives pressing for discharge. His discussion of sublimation typically implies an understanding of drives in just such a sense, as does his idea that dreams provide for the discharge of drives. Nonetheless, he did not relegate all that is derived from instinct and the body to this realm. While for Nietzsche there is no stable, enduring, true self awaiting discovery and liberation, the body and the self (in the broadest sense of the term, including what is unconscious and may be at work in dreams as Rycroft describes it) may offer up potential communication and direction to the "I" or ego. However, at times Nietzsche describes the "I" or ego as having very little, if any, idea as to how it is being lived by the "it."

Nietzsche, like Freud, describes two types of mental processes: one which 'binds [man's] life to reason and its concepts', of such an order as not to be swept away by the current and to lose himself; the other, pertaining to the worlds of myth, art and the dream, 'constantly showing the desire to shape the existing world of the wide-awake person to be variegatedly irregular and disinterestedly incoherent, exciting and eternally new, as is the world of dreams'. Art may function as a 'middle sphere' and 'middle faculty' (transitional sphere and faculty) between a more primitive 'metaphor-world' of impressions and the forms of uniform abstract concepts.

Again, Nietzsche, like Freud, attempts to account for the function of consciousness in light of a new understanding of unconscious mental functioning. Nietzsche distinguishes between himself and 'older philosophers' who do not appreciate the significance of unconscious mental functioning, while Freud distinguishes between the unconscious of the philosophers and the unconscious of psychoanalysis. What is missing is any acknowledgement of Nietzsche as a philosopher and psychologist whose ideas on unconscious mental functioning have very strong affinities with psychoanalysis, as Freud himself would mention on a number of other occasions. Neither here, nor in his letters to Fliess in which he mentions Lipps, nor in his later paper in which Lipps (the 'German philosopher') is acknowledged again, is Nietzsche mentioned when it comes to acknowledging, in a specific and detailed manner, an important forerunner of psychoanalysis. Although Freud would state on a number of occasions that Nietzsche's insights are close to psychoanalysis, very rarely would he state any details regarding the similarities. He mentions a friend calling his attention to the notion of the criminal from a sense of guilt, a patient calling his attention to the pride-memory aphorism, Nietzsche's idea that in dreams we enter the realm of the psyche of primitive man, etc. There is never any detailed statement on just what Nietzsche anticipated pertinent to psychoanalysis. This is so even after Freud had been taking Nietzsche with him on vacation.

Equally important, the classical assumption that the only privileged or valid knowledge is scientific is one of the primary sources of the stark division between the two cultures of humanists-social scientists and scientists-engineers. In this view, Wilson is quite correct in assuming that a timely end to the two-culture war and a renewed dialogue between members of these cultures is now critically important to human survival. It is also clear, however, that dreams of reason based on the classical paradigm will only serve to perpetuate the two-culture war. Since these dreams are also remnants of an old scientific world view that no longer applies, in theory or in fact, to the actual character of physical reality, they will in all probability serve only to frustrate the solution of real-world problems.

However, there is a renewed basis for dialogue between the two cultures, though it is quite different from that described by Wilson. Since classical epistemology has been displaced, or is in the process of being displaced, by the new epistemology of science, the truths of science can no longer be viewed as transcendent and absolute in the classical sense. The universe more closely resembles a giant organism than a giant machine, and it displays emergent properties that serve to perpetuate the existence of the whole, in both physics and biology, that cannot be explained in terms of unrestricted determinism, simple causality, first causes, linear movements and initial conditions. Perhaps the first and most important precondition for renewed dialogue between the two cultures is the awareness, as Einstein put it, that a human being is a 'part of the whole'. It is this shared awareness that allows us to free ourselves of the 'optical delusion' of our present conception of self as 'partially limited in space and time' and to widen 'our circle of compassion to embrace all living creatures and the whole of nature in its beauty'. One cannot, of course, merely reason oneself into an acceptance of this view; but the new perceptions of the world do provide a reasoned basis for what Einstein termed 'cosmic religious feeling' - the felt sense of unity with the whole that makes an essential difference to how we experience the existence of the universe and our place within it.

Those who have this capacity will hopefully be able to communicate their enhanced scientific understanding of the relations between part, which is our self, and whole, which is the universe, in ordinary language with enormous emotional appeal. The task that lies before the poets of this new reality has been nicely described by Jonas Salk: "Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect 'reality'. By using the processes of Nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing reality as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide comprehensible guides to living. In this way, Man's imagination and intellect play vital roles in his survival and evolution."

It is time, we suggest, for the religious imagination and the religious experience to engage the complementary truths of science, and to fill that which science leaves silent with meaning. This does not mean, however, that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require any ontology, and is in no way diminished by the lack of an ontology. And one is free to recognize a basis for a dialogue between science and religion for the same reason that one is free to deny that this basis exists - there is nothing in our current scientific world view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being.

The present time is clearly a time of a major paradigm shift, but consider the last great paradigm shift, the one that resulted in the Newtonian framework. This previous paradigm shift was profoundly problematic for the human spirit: it led to the conviction that we are strangers, freaks of nature, conscious beings in a universe that is almost entirely unconscious, and that, since the universe is strictly deterministic, even the free will we feel in regard to the movements of our bodies is an illusion. Yet it was probably necessary for the Western mind to go through the acceptance of such a paradigm.

In the final analysis, there will be philosophers unprepared to accept the thesis that, if a given cognitive capacity is psychologically real, then there must be an explanation of how it is possible for an individual in the course of human development to acquire that capacity, or that such explanations can have a role to play in philosophical accounts of concepts and conceptual abilities. The most obvious basis for such a view would be a Fregean distrust of 'psychology' that leads to a rigid division of labour between philosophy and psychology. The operative thought is that the task of a philosophical theory of concepts is to explain what a given concept is, or what a given conceptual ability consists in; and this, it is frequently maintained, is something that can be done in complete independence of explaining how such a concept or ability might be acquired. The underlying distinction is one between philosophical questions centring on concept possession and psychological questions centring on concept acquisition. However strictly one adheres to this distinction, though, it provides no support for rejecting the constraint that if it is impossible to explain how an individual could acquire a given cognitive capacity, then that capacity cannot be psychologically real. The neo-Fregean distinction is directed against the view that facts about how concepts are acquired have a role to play in explaining and individuating concepts. But this view does not have to be disputed by a supporter of the constraint; all the supporter is committed to is the principle that no satisfactory account of what a concept is should make it impossible to explain how that concept can be acquired. And this principle has nothing to say about the further question of whether psychological explanation has a role to play in a constitutive explanation of the concept; hence it is not in conflict with the neo-Fregean distinction.

A full account of the structure of consciousness will need to attend to those higher, conceptual forms of consciousness to which little attention has so far been given, and to how they might emerge from more primitive foundations. One guiding thought is that an explanation of everything distinctive about consciousness will emerge out of an account of what it is for a subject to be capable of thinking about himself. But this cannot by itself yield a proper understanding of the complex phenomenon of consciousness; nor are there facts about linguistic mastery alone that will determine or explain what might be termed the cognitive dynamics of individual thought processes. The way forward for a theory of consciousness, it seems, is to chart the characteristic features individuating the various distinct conceptual forms of consciousness in a way that provides a taxonomy of them, and then to show how each manifests its determinate level of content. What is hoped is now clear: the higher forms of consciousness emerge from a rich foundation of non-conceptual representations, and it is these forms of conscious thought that hold the key, not just to an eventual account of mastery of the conscious paradigms, but to a proper understanding of the complexity of self-consciousness and of consciousness generally.

Until very recently it might have been said that most approaches to the philosophy of science were 'cognitive'. This includes logical positivism, as nearly all of those who wrote about the nature of science would have agreed that science ought to be 'value-free'. This was a particular emphasis of the first positivists, as it would be of their twentieth-century successors: science deals with 'facts', and facts and values are irreducibly distinct. Facts are objective; they are what we seek in our knowledge of the world. Values are subjective: they bear the mark of human interest; they are the radically individual products of feeling and desire. Value cannot, therefore, be inferred from fact, and fact ought not to be influenced by value. There were philosophers, notably some in the Kantian tradition, who viewed the relation of fact and value differently. But such was the legacy of three centuries of largely empiricist reflection on the 'new' sciences ushered in by Galileo Galilei (1564-1642), the Italian scientist whose distinction belongs to the history of physics and astronomy rather than to natural philosophy.

The philosophical importance of Galileo's science rests largely upon the following closely related achievements: (1) his stunningly successful arguments against Aristotelian science; (2) his proofs that mathematics is applicable to the real world; (3) his conceptually powerful use of experiments, both actual and employed regulatively; (4) his treatment of causality, replacing appeal to hypothesized natural ends with a quest for efficient causes; and (5) his unwavering confidence in the new style of theorizing that would become known as 'mechanical explanation'.

A century later, the maxim that scientific knowledge is 'value-laden' seems almost as entrenched as its opposite was earlier. It is supposed that the dichotomy between fact and value has been breached, and philosophers of science seem quite at home with the thought that science and values may be closely intertwined after all. What has happened to cause such an apparently radical change? What are its implications for the objectivity of science, the prized characteristic that, from Plato's time onwards, has been assumed to set off real knowledge (epistēmē) from mere opinion (doxa)? To answer these questions adequately, one would first have to know something of the reasons behind the decline of logical positivism, as well as of the diversity of the philosophies of science that have succeeded it.

More generally, the interdisciplinary field of cognitive science is burgeoning on several fronts. Contemporary philosophical reflection about the mind - which has been quite intensive - has been influenced by this empirical inquiry, to the extent that the boundary lines between them are blurred in places.

Nonetheless, the philosophy of mind at its core remains a branch of metaphysics, traditionally conceived. Philosophers continue to debate foundational issues in terms not radically different in kind from those in vogue in previous eras. Many issues in the metaphysics of science hinge on the notion of 'causation'. This notion is as important in science as it is in everyday thinking, and much scientific theorizing is concerned specifically to identify the 'causes' of various phenomena. However, there is little philosophical agreement on what it is to say that one event is the cause of another.

Modern discussion of causation starts with the Scottish philosopher, historian, and essayist David Hume (1711-76), who denied that we have innate ideas and argued that the causal relation is observably nothing other than 'constant conjunction': no necessary connections are observable anywhere, and there is neither an empirical nor a demonstrative proof for the assumptions that the future will resemble the past and that every event has a cause. Hume further held that the dispute between advocates of free will and determinism is irresolvable, that extreme scepticism is coherent, and that we cannot find the experiential source of our ideas of self, substance, or God.

According to Hume (1978), one event causes another if and only if events of the type to which the first event belongs regularly occur in conjunction with events of the type to which the second event belongs. This formulation, however, leaves several questions open. First, there is the problem of distinguishing genuine 'causal laws' from 'accidental regularities'. Not all regularities are sufficiently law-like to underpin causal relationships. Being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that those screws are made of copper because they are in my desk. Secondly, the idea of constant conjunction does not give a 'direction' to causation. Causes need to be distinguished from effects, but knowing that A-type events are constantly conjoined with B-type events does not tell us which of 'A' and 'B' is the cause and which the effect, since constant conjunction is itself a symmetric relation. Thirdly, there is a problem about 'probabilistic causation'. When we say that causes and effects are constantly conjoined, do we mean that the effects are always found with the causes, or is it enough that the causes make the effects probable?
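
The two readings of this last question can be put schematically (a rough formalization, not Hume's own notation):

\[
\text{Deterministic: } \Pr(E \mid C) = 1, \qquad \text{Probabilistic: } \Pr(E \mid C) > \Pr(E).
\]

It is worth noting that probabilistic relevance is itself symmetric - \(\Pr(E \mid C) > \Pr(E)\) holds exactly when \(\Pr(C \mid E) > \Pr(C)\) - so the probabilistic reading inherits the second problem: nothing in the analysis by itself distinguishes cause from effect.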

Many philosophers of science during the past century have preferred to talk about ‘explanation’ rather than causation. According to the covering-law model of explanation, something is explained if it can be deduced from premises that include one or more laws. As applied to the explanation of particular events, this implies that a particular event can be explained if it is linked by a law to some other particular event. However, while they are often treated as separate theories, the covering-law account of explanation is at bottom little more than a variant of Hume’s constant conjunction account of causation. This affinity shows up in the fact that the covering-law account faces essentially the same difficulties as Hume: (1) In appealing to deduction from ‘laws’, it needs to explain the difference between genuine laws and accidentally true regularities. (2) It allows us to ‘explain’ causes by effects, as well as effects by causes; after all, it is as easy to deduce the height of a flag-pole from the length of its shadow and the laws of optics as the other way round. (3) Are the laws invoked in explanation required to be exceptionless and deterministic, or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes cancer more likely, in explaining why some particular person develops cancer?
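The covering-law (deductive-nomological) model can be displayed as an argument schema - a standard textbook rendering rather than a quotation from any one source:

$$\frac{L_1, \ldots, L_n \qquad C_1, \ldots, C_m}{E}$$

Here the $L_i$ are laws, the $C_j$ are statements of particular conditions, and $E$ describes the event to be explained. The flag-pole objection is then that the schema is satisfied equally well when $E$ reports the pole’s height and one of the $C_j$ reports the shadow’s length.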

Nevertheless, one of the central aims of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies used in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as biological function. By introducing ‘teleological considerations’, one such account views beliefs as states with a biological purpose and analyses their truth conditions specifically as those conditions with which they are biologically supposed to covary.

Similarly, a teleological theory holds that ‘r’ represents ‘x’ if it is r’s function to indicate (i.e., covary with) ‘x’. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical theories of functions and a-historical theories. Historical theories individuate functional states (and therefore contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was ‘learned’ or the way it evolved. A historical theory might hold that the function of ‘r’ is to indicate ‘x’ only if the capacity to token ‘r’ was developed (selected, learned) because it serves to indicate ‘x’. Thus, a state physically indistinguishable from ‘r’ (physical states being a-historical) but lacking r’s historical origins would not represent ‘x’ according to historical theories.
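Put schematically - this is my gloss on the shape such theories share, not any single theorist’s formulation:

$$r \text{ represents } x \iff \text{it is } r\text{’s function to indicate (covary with) } x$$

with the historical constraint added as: it is r’s function to indicate x only if tokens of r were selected, or learned, because they covaried with x.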

The American philosopher of mind Jerry Alan Fodor (1935-) is known for his resolute ‘realism’ about the nature of mental functioning, taking the analogy between thought and computation seriously. Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of ‘holists’ such as the American philosopher Donald Davidson (1917-2003), or with ‘instrumentalists’ about mental ascription, such as the British philosopher of logic and language Michael Anthony Eardley Dummett (1925-). In recent years Fodor has become a vocal critic of some of the aspirations of cognitive science.

Nonetheless, consider a suggestion that appeals to teleology to connect ‘causation’ with ‘content’. Suppose that there is a causal path from A’s to ‘A’s and a causal path from B’s to ‘A’s, and that our problem is to find some difference between B-caused ‘A’s and A-caused ‘A’s in virtue of which the former but not the latter misrepresent. Perhaps the two paths differ in their counterfactual properties. In particular, even if A’s and B’s are both causally efficacious in producing tokens of ‘A’, perhaps we can assume that only A’s would cause ‘A’s in - as one says - ‘optimal circumstances’. We could then hold that a symbol expresses its ‘optimal property’, viz., the property that would causally control its tokening in optimal circumstances. Correspondingly, when the tokening of a symbol is causally controlled by properties other than its optimal property, the resulting tokens are ipso facto wild.

Suppose that this story about ‘optimal circumstances’ is proposed as part of a naturalized semantics for mental representations. In that case it is, of course, essential that the optimal circumstances for tokening a mental representation be specified in terms that are not themselves either semantical or intentional. (It would not do, for example, to identify the optimal circumstances for tokening a symbol as those in which the tokens are true; that would be to assume precisely the semantical notion that the theory is supposed to naturalize.) The suggestion - to put it concisely - is that appeals to ‘optimality’ should be buttressed by appeals to ‘teleology’: optimal circumstances are the ones in which the mechanisms that mediate symbol tokening are functioning ‘as they are supposed to’. With mental representations, these would paradigmatically be circumstances in which the mechanisms of belief fixation are functioning as they are supposed to function.

So, then, the teleology of the cognitive mechanisms determines the optimal conditions for belief fixation, and the optimal conditions for belief fixation determine the content of beliefs. So the story goes. An obvious objection is that cognitive mechanisms may function exactly as they are supposed to and still deliver falsehoods.

To put this objection in slightly different words: the teleology story perhaps strikes one as plausible because it explicates one normative notion - truth - in terms of another normative notion - optimality. However, this appearance may be spurious, for there is no guarantee that the kind of optimality that teleology reconstructs is the kind of optimality that the explication of ‘truth’ requires. When mechanisms of repression are working ‘optimally’ - when they are working ‘as they are supposed to’ - what they deliver are likely to be ‘falsehoods’.

Once again, there is no obvious reason why conditions that are optimal for the tokening of one sort of mental symbol need be optimal for the tokening of other sorts. Perhaps the optimal conditions for fixing beliefs about very large objects are different from those for fixing beliefs about very small ones, and different again from those for fixing perceptual beliefs. This raises the possibility that, in order to say which conditions are optimal for the fixation of a belief, we should already have to know what the content of the belief is. Our explication of content would then require a notion of optimality, whose explication in turn requires a notion of content, and the resulting pile would clearly be unstable.

Functional role theories, by contrast, hold that r’s representing ‘x’ is grounded in the functional role ‘r’ has in the representing system, i.e., in the relations imposed by specified cognitive processes between ‘r’ and other representations in the system’s repertoire. Functional role theories take their cue from such common-sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

That said, nowhere is the new period of collaboration between philosophy and other disciplines more evident than in the new subject of cognitive science. Cognitive science from its very beginning has been ‘interdisciplinary’ in character, and is in effect the joint property of psychology, linguistics, philosophy, computer science and anthropology. There is, therefore, a great variety of research projects within cognitive science, but the central area of cognitive science, its hard core, rests on the assumption that the mind is best viewed as analogous to a digital computer. The basic idea behind cognitive science is that recent developments in computer science and artificial intelligence have enormous importance for our conception of human beings. The basic inspiration for cognitive science went something like this: human beings do information processing; computers are designed precisely to do information processing; therefore, one way to study human cognition - perhaps the best way to study it - is to study it as a matter of computational information processing. Some cognitive scientists think that the computer is just a metaphor for the human mind; others think that the mind is literally a computer program. Still, it is fair to say that without the computational model there would not have been a cognitive science as we now understand it.

The ‘Essay Concerning Human Understanding’ is the first modern systematic presentation of empiricist epistemology, and as such had important implications for the natural sciences and for philosophy of science generally. Like his predecessor Descartes, the English philosopher John Locke (1632-1704) began his account of knowledge from the conscious mind aware of ideas. Unlike Descartes, however, he was concerned not to build a system based on certainty, but to identify the mind’s scope and limits. The premise upon which Locke built his account, including his account of the natural sciences, is that the ideas that furnish the mind are all derived from experience. He thus totally rejected any kind of innate knowledge. In this he consciously opposed Descartes (1596-1650), who had argued that it is possible to come to knowledge of fundamental truths about the natural world through reason alone - that we can come to know the essential nature of both ‘minds’ and ‘matter’ by pure reason. Locke accepted Descartes’s criterion of clear and distinct ideas as the basis for knowledge, but denied any source for them other than experience: ideas received through the five senses (ideas of sensation) and ideas engendered by inner experience (ideas of reflection) are the building blocks out of which the understanding is composed.

Locke combined his commitment to ‘the new way of ideas’ with an espousal of the ‘corpuscular philosophy’ of the Irish scientist Robert Boyle (1627-92). This, in essence, was an acceptance of a revised, more sophisticated account of matter and its properties advocated by the ancient atomists and recently supported by Galileo (1564-1642) and Pierre Gassendi (1592-1655). Boyle argued from theory and experiment that there were powerful reasons to justify some kind of corpuscular account of matter and its properties. He called the latter ‘qualities’, which he distinguished as primary and secondary. The distinction between primary and secondary qualities may be reached by two different routes: either from the nature or essence of matter, or from the nature and essence of experience, though in practice these have a tendency to run together. The former considerations make the distinction seem an a priori, or necessary, truth about the nature of matter, while the latter make it an empirical hypothesis. Locke, too, accepted this account, arguing that the ideas we have of the primary qualities of bodies resemble those qualities as they are in the object, whereas the ideas of the secondary qualities, such as colour, taste, and smell, do not resemble their causes in the object.

There is no strong connection between acceptance of the primary-secondary quality distinction and empiricism: Descartes had also argued strongly for the distinction, it gained nearly universal acceptance among natural philosophers, and Locke embraced it within his more comprehensive empirical philosophy. However, Locke’s empiricism did have major implications for the natural sciences, as he well realized. His account begins with an analysis of experience. All ideas, he argues, are either simple or complex. Simple ideas are those like the red of a particular rose or the roundness of a snowball. Complex ideas, such as our ideas of the rose or the snowball, are combinations of simple ideas. We may create new complex ideas in our imagination - a parallelogram, for example. Nevertheless, simple ideas can never be created by us: we just have them or not, and characteristically they are caused, for example by the impact on our senses of rays of light or vibrations of sound in the air coming from a particular physical object. Since we cannot create simple ideas, and they are determined by our experience, our knowledge is in a very strict and uncompromising way limited. Besides, our experiences are always of the particular, never of the general. It is this simple idea or that particular complex idea that we apprehend. We never in that sense apprehend a universal truth about the natural world, but only particular instances. It follows from this that all claims to generality about that world - for example, all claims to identify what was then beginning to be called the laws of nature - must to that extent go beyond our experience and thus be less than certain.

The Scottish philosopher, historian, and essayist David Hume (1711-76) gave his famous discussion of causation in both his major philosophical works, the ‘Treatise’ (1739) and the ‘Enquiry’ (1777). Our notion of causality - of that which is responsible for an effect - and our talk of causal laws, Hume contends, involve three ideas:

1. That there should be a regular concomitance between events of the type of the cause and those of the type of the effect.

2. That the cause event should be contiguous with the effect event.

3. That the cause event should necessitate the effect event.

Tenets (1) and (2) occasion no difficulty for Hume, since he believes that there are patterns of sensory impressions non-problematically related to the ideas of regular concomitance and of contiguity. The third requirement, however, is deeply problematic, in that the idea of necessity that figures in it seems to have no sensory impression correlated with it. However carefully and attentively we scrutinize a causal process, we do not seem to observe anything that might be the correlate of the idea of necessity. We do not observe any kind of activity, power, or necessitation. All we ever observe is one event following another, which is logically independent of it. Nor is the connection logically necessary, since, as Hume observes, one can jointly assert the existence of the cause and deny the existence of the effect, as specified in the causal statement or the law of nature, without contradiction. What, then, are we to make of the seemingly central notion of necessity that is deeply embedded in the very idea of causation, or lawfulness? To this query Hume gives an ingenious and telling answer. There is an impression corresponding to the idea of causal necessity, but it is a psychological phenomenon: our expectation that an event similar to those we have already observed to be correlated with cause-type events will occur in this instance too. Where does that impression come from? It is created, as a kind of mental habit, by the repeated experience of regular concomitance between events of the type of the cause and events of the type of the effect. The impression thus corresponds to the idea of regular concomitance - and a law of nature asserts nothing but the existence of the regular concomitance.

At this point in our narrative, the question arises whether this factor of life in nature, thus interpreted, corresponds to anything that we observe in nature. All philosophy is an endeavour to obtain a self-consistent understanding of things observed. Thus, its development is guided in two ways: by the demand for coherent self-consistency, and by the elucidation of things observed. How are we to compare this interpretation with our direct observations? Should we turn to science? No. There is no way in which the scientific endeavour can detect the aliveness of things: its methodology rules out the possibility of such a finding. On this point, the English mathematician and philosopher Alfred North Whitehead (1861-1947) comments that science can find no individual enjoyment in nature and no creativity in nature; it finds mere rules of succession. These negations are true of natural science; they are inherent in its methodology. The reason for this blindness of physical science lies in the fact that such science deals with only half the evidence provided by human experience. It divides the seamless coat - or, to change the metaphor into a happier form, it examines the coat, which is superficial, and neglects the body, which is fundamental.

Whitehead claims that the methodology of science makes it blind to a fundamental aspect of reality, namely the primacy of experience: it neglects half the evidence. Working within Descartes’s dualistic framework of matter and mind as separate and incommensurate, science limits itself to the study of objectified phenomena, neglecting the subject and the mental events that constitute his or her experience.

Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect ‘blindness’, but there is no need to rely on suspicions. This blindness is evident. Scientific discoveries, impressive as they are, are fundamentally superficial. Science can express regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton’s law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity - gravity. According to this law, the gravitational attraction between two objects decreases in proportion to the square of the distance between them. Why is that so? Newton could not provide an answer. Simpler still: why does the continuum of space have three dimensions? Why is time one-dimensional? Whitehead notes, ‘None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure which within the scale of observation do in fact prevail’.
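For reference, Newton’s law states the regularity without explaining it. With masses $m_1$ and $m_2$ separated by a distance $r$, and $G$ the gravitational constant:

$$F = G\,\frac{m_1 m_2}{r^2}$$

The law tells us how the attraction varies, not why the exponent is 2 rather than anything else - which is precisely Whitehead’s point.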

This analysis reveals that the capacity of science to fathom the depths of reality is limited. For example, if reality is in fact made up of discrete units, and these units have the fundamental character of being ‘the pulsing throbs of experience’, then science may be able to discover the discreteness, but it has no access to the subjective side of nature, since, as the Austrian physicist Erwin Schrödinger (1887-1961) points out, we ‘exclude the subject of cognizance from the domain of nature that we endeavour to understand’. It follows that to find ‘the elucidation of things observed’ in relation to the experiential or aliveness aspect, we cannot rely on science; we need to look elsewhere.

If, instead of relying on science, we rely on our immediate observation of nature and of ourselves, we find, first, that this [i.e., Descartes’s] stark division between mentality and nature has no ground in our fundamental observation: we find ourselves living within nature. Secondly, we should conceive mental operations as among the factors which make up the constitution of nature. Thirdly, we should reject the notion of idle wheels in the process of nature: every factor that emerges makes a difference, and that difference can only be expressed in terms of the individual character of that factor.

Whitehead continues to analyse our experience in general, and our observations of nature in particular, and arrives at ‘mutual immanence’ as a central theme. This mutual immanence is almost obvious: I am a part of the universe, and, since I experience the universe, the experienced universe is part of me. Whitehead gives an example: ‘I am in the room, and the room is an item in my present experience. Nevertheless, my present experience is what I am now’. A generalization of this relationship to the case of any actual occasion yields the conclusion that ‘the world is included within the occasion in one sense, and the occasion is included in the world in another sense’. The idea that each actual occasion appropriates its universe follows naturally from such considerations.

The description of an actual entity as a distinct unit is, therefore, only one part of the story. The other, complementary part is this: the very nature of each actual entity is one of interdependence with all the other actual entities in the universe. Each existent entity is a process of prehending, or appropriating, all other actual entities and of creating a new entity out of them all, namely itself.

There are two general strategies for distinguishing laws from accidentally true generalizations. The first stands by Hume’s idea that causal connections are mere constant conjunctions, and then seeks to explain why some constant conjunctions are better than others. That is, this first strategy accepts the principle that causation involves nothing more than certain events always happening with certain others, and then seeks to explain why some such patterns - the ‘laws’ - matter more than others - the ‘accidents’. The second strategy, by contrast, rejects the Humean presupposition that causation involves nothing more than co-occurrence, and instead postulates a relation of ‘necessitation’, a kind of binding link, which connects events related by law, but not those events (like having a screw in my desk and being made of copper) that are only accidentally conjoined.

There are several versions of the first, Humean strategy. The most successful, originally proposed by the Cambridge mathematician and philosopher F.P. Ramsey (1903-30) and later revived by the American philosopher David Lewis (1941-2002), holds that laws are those true generalizations that can be fitted into an ideal systematization of knowledge. The thought is that the laws are those patterns that are explicated by basic science, either as fundamental principles themselves or as consequences of those principles, while accidents, although true, have no such explanation. Thus, ‘All water at standard pressure boils at 100°C’ is a consequence of the laws governing molecular bonding, but the fact that ‘All the screws in my desk are copper’ is not part of the deductive structure of any satisfactory science. Ramsey neatly encapsulated this idea by saying that laws are ‘consequences of those propositions that we should take as axioms if we knew everything and organized it as simply as possible in a deductive system’.

Advocates of the alternative, non-Humean strategy object that the difference between laws and accidents is not a ‘linguistic’ matter of deductive systematization, but a ‘metaphysical’ contrast between the kinds of links they report. They argue that there is a link in nature between being at 100°C and boiling, but not between being ‘in my desk’ and being ‘made of copper’, and that this has nothing to do with how the description of this link may fit into theories. According to the Australian philosopher D.M. Armstrong (1983), the most prominent defender of this view, the real difference between laws and accidents is simply that laws report relationships of natural ‘necessitation’, while accidents only report that two types of events happen to occur together.

Armstrong’s view may seem intuitively plausible, but it is arguable that the notion of necessitation simply restates the problem rather than solving it. Armstrong says that necessitation involves something more than constant conjunction: if two events are related by necessitation, then it follows that they are constantly conjoined, but two events can be constantly conjoined without being related by necessitation, as when the constant conjunction is just a matter of accident. So necessitation is a stronger relationship than constant conjunction. However, Armstrong and other defenders of this view say very little about what this extra strength amounts to, except that it distinguishes laws from accidents. Armstrong’s critics argue that a satisfactory account of laws ought to cast more light than this on the nature of laws.

Hume said that the earlier of two causally related events is always the cause, and the later the effect. However, there are several objections to using the earlier-later ‘arrow of time’ to analyse the directional ‘arrow of causation’. For a start, it seems in principle possible that some causes and effects could be simultaneous. What is more, the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophical explanation. One of the most popular explanations makes the ‘movement’ from earlier to later depend on the direction of causation itself, explaining ‘earlier’ as the direction in which causes lie, and ‘later’ as the direction of effects. If we take this line, we will clearly need to find some account of the direction of causation that does not itself assume the direction of time.

Several accounts have been proposed. David Lewis (1979) argued that the asymmetry of causation derives from an ‘asymmetry of over-determination’. The over-determination of present events by past events - consider a person who dies after simultaneously being shot and struck by lightning - is a very rare occurrence. By contrast, the multiple ‘over-determination’ of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis’s example, when the president presses the red button in the White House, the future effects include not only the dispatch of nuclear missiles, but also the fingerprint on the button, his trembling, the further depletion of his gin bottle, the recording of the button’s click on tape, the emission of light waves bearing the image of his action through the window, the signal current passing along the wires, and so on, and so on.

Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then this implies the effect would have been absent too, since (apart from freak over-determinations like the lightning-shooting case) there will not be any other causes left to ‘fix’ the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to ‘fix’ the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.

Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Following the philosopher of science and probability theorist Hans Reichenbach (1891-1953), they note that the different causes of any given type of effect are normally probabilistically independent of each other; by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, both obesity and high excitement can cause heart attacks, but this does not imply that fat people are more likely to get excited than thin ones; yet the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
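Reichenbach’s asymmetry can be stated schematically (a standard modern reconstruction, not a quotation; ‘A’ and ‘B’ are joint effects of a common cause ‘C’, while ‘C1’ and ‘C2’ are independent causes of a common effect):

$$P(A \wedge B) > P(A)\,P(B), \qquad \text{yet} \qquad P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C)$$

$$P(C_1 \wedge C_2) = P(C_1)\,P(C_2)$$

Joint effects are correlated, and their common cause ‘screens off’ the correlation; independent causes of a common effect remain uncorrelated.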

However, there is another current of thought in the philosophy of science: the tradition of ‘negative’ or ‘eliminative’ induction. From the English statesman and philosopher Francis Bacon (1561-1626), and in modern times from the philosopher of science Karl Raimund Popper (1902-1994), we have the idea of using logic to bring falsifying evidence to bear on hypotheses about what must universally be the case. Many thinkers accept in essence Popper’s solution to the problem of demarcating proper science from its imitators, namely that the former results in genuinely falsifiable theories whereas the latter do not. Falsifiability, accordingly, underlies many people’s objections to such ideologies as psychoanalysis and Marxism.

Hume was interested in the processes by which we acquire knowledge: the processes of perceiving and thinking, of feeling and reasoning. He recognized that much of what we claim to know derives from other people secondhand, thirdhand or worse; moreover, our perceptions and judgements can be distorted by many factors - by what we are studying, and by the very act of study itself. The main reason, however, behind his emphasis on ‘probabilities and those other measures of evidence on which life and action entirely depend’ is this:

Evidently, all reasonings concerning ‘matters of fact’ are founded on the relation of cause and effect; we can never infer the existence of one object from another unless they are connected, either mediately or immediately.

When we apparently observe a whole sequence, say of one ball hitting another, what do we observe? In the much commoner cases, when we wonder about the unobserved causes or effects of the events we observe, what precisely are we doing?

Hume recognized that a notion of ‘must’ or necessity is a peculiar feature of causal relations, inferences and principles, and he challenges us to explain and justify the notion. He argued that there is no observable feature of events, nothing like a physical bond, which can properly be labelled the ‘necessary connection’ between a given cause and its effect: events simply occur, and there is no ‘must’ or ‘ought’ about them. However, repeated experience of pairs of events sets up the habit of expectation in us, such that when one of the pair occurs we inescapably expect the other. This expectation makes us infer the unobserved cause or unobserved effect of the observed event, and we mistakenly project this mental inference onto the events themselves. There is no necessity observable in causal relations; all that can be observed is regular sequence. There is necessity in causal inference, but only in the mind. Once we realize that causation is a relation between pairs of events, we also realize that often we are not present for the whole sequence that we want to divide into ‘cause’ and ‘effect’. Our understanding of the causal relation is thus intimately linked with the role of causal inference, for only causal inferences entitle us to ‘go beyond what is immediately present to the senses’. Two very important assumptions now emerge behind the causal inference: the assumption that like causes, ‘in like circumstances, will always produce like effects’, and the assumption that ‘the course of nature will continue uniformly the same’ - or, briefly, that the future will resemble the past. Unfortunately, this last assumption admits of neither empirical nor a priori proof; that is, it can be conclusively established neither by experience nor by thought alone.

Hume endorsed a standard seventeenth-century view that all our ideas are ultimately traceable, by analysis, to sensory impressions of an internal or external kind. Accordingly, he claimed that all his theses are based on ‘experience’, understood as sensory awareness together with memory, since only experience establishes matters of fact. But is our belief that the future will resemble the past properly construed as a belief concerning only a matter of fact? As the English philosopher Bertrand Russell (1872-1970) remarked, the real problem Hume raises is whether future futures will resemble future pasts in the way that past futures really did resemble past pasts. Hume declares that ‘if . . . the past may be no rule for the future, all experience becomes useless, and can give rise to no inference or conclusion’. Yet, he held, the supposition cannot stem from innate ideas, since there are no innate ideas on his view; nor can it stem from any abstract formal reasoning. For one thing, the future can surprise us, and no formal reasoning seems able to embrace such contingencies; for another, even animals and unthinking people conduct their lives as if they assume the future resembles the past: dogs return for buried bones, children avoid a painful fire, and so forth. Hume is not deploring the fact that we have to conduct our lives on the basis of probabilities, and he is not saying that inductive reasoning could or should be avoided or rejected. Rather, he accepted inductive reasoning but tried to show that, whereas formal reasoning of the kind associated with mathematics cannot establish or prove matters of fact, factual or inductive reasoning lacks the ‘necessity’ and ‘certainty’ associated with mathematics. His position is therefore clear: because ‘every effect is a distinct event from its cause’, only investigation can settle whether any two particular events are causally related. Causal inferences cannot be drawn with the force of logical necessity familiar to us from deduction; but, although they lack such force, they should not be discarded, for inductive inferences are inescapable and invaluable. What, then, makes ‘experience’ the standard of our future judgement? The answer is ‘custom’: it is a brute psychological fact, without which even animal life of a simple kind would be mostly impossible. ‘We are determined by custom to suppose the future conformable to the past’ (Hume, 1978); nevertheless, whenever we need to calculate likely events we must supplement and correct such custom by self-conscious reasoning.

Nonetheless, the causal theory of reference will fail once it is recognized that all representation occurs under some aspect, and that the extensionality of causal relations is inadequate to capture the aspectual character of reference. The only kind of causation that could be adequate to the task of reference is intentional or mental causation, but the causal theory of reference cannot concede that reference is ultimately achieved by some mental device, since the whole approach behind the causal theory was to eliminate the traditional mentalism of theories of reference and meaning in favour of objective causal relations in the world. The causal theory, though at present by far the most influential theory of reference, will be a failure for these reasons.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. Still, that is no problem for the identity theory. As J.J.C. Smart (1962), who first argued for the identity theory, emphasized, the requisite identities do not depend on our concepts of mental states or the meanings of mental terms. For ‘a’ to be identical with ‘b’, ‘a’ and ‘b’ must have the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The principle at work here is the indiscernibility of identicals: if ‘A’ is identical with ‘B’, then every property that ‘A’ has, ‘B’ has, and vice versa. This is sometimes known as Leibniz’s Law.
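In standard second-order notation, Leibniz’s Law (the indiscernibility of identicals) reads:

$$\forall x\,\forall y\,\big(x = y \rightarrow \forall F\,(Fx \leftrightarrow Fy)\big)$$

It should not be confused with its converse, the identity of indiscernibles, which is far more controversial.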

Nevertheless, a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So disallowing dualism with respect to substances and their states simply leads to its reappearance at the level of the properties of those states.

All in all, the difficulty of providing a reductive naturalistic analysis of internal representation has led many to doubt that the task can be achieved, or that it is necessary. Although a story can be told about how some words or signs were learned via association or other causal connections with their referents, there is no reason to believe that the ‘stand-for’ relation, or semantic notions in general, can be reduced to or eliminated in favour of non-semantic terms.

Although linguistic and pictorial representations are undoubtedly the most prominent symbolic forms we employ, the range of representational systems humans understand and regularly use is surprisingly large. Sculptures, maps, diagrams, graphs, gestures, musical notation, traffic signs, gauges, scale models, and tailors’ swatches are but a few of the representational systems that play a role in communication, thought, and the guidance of behaviour. Indeed, the importance and prevalence of our symbolic activities has been taken as a hallmark of the human.

What is it that distinguishes items that serve as representations from other objects or events? And what distinguishes the various kinds of symbols from each other? As for the first question, there has been general agreement that the basic notion of a representation involves one thing’s ‘standing for’, ‘being about’, ‘referring to’ or ‘denoting’ something else. The major debates have been over the nature of this connection between a representation and that which it represents. As for the second question, perhaps the most famous and extensive attempt to organize and differentiate among alternative forms of representation is found in the works of the American philosopher of science Charles Sanders Peirce (1839-1914), who graduated from Harvard in 1859 and, apart from lecturing at Johns Hopkins University from 1879 to 1884, did almost no teaching. Peirce’s theory of signs is complex, involving a number of concepts and distinctions that are no longer paid much heed. The aspect of his theory that remains influential and is widely cited is his division of signs into icons, indices and symbols. Icons are signs that are said to be like or resemble the things they represent, e.g., portrait paintings. Indices are signs that are connected to their objects by some causal dependency, e.g., smoke as a sign of fire. Symbols are those signs that are related to their objects by virtue of use or association: they are arbitrary labels, e.g., the word ‘table’. This tripartite division among signs, or variants of this division, is routinely put forth to explain differences in the way representational systems are thought to establish their links to the world. Further, placing a representation in one of the three divisions has been used to account for the supposed differences between conventional and non-conventional representations, between representations that do and do not require learning to understand, and between representations, like language, that need to be read and those which do not require interpretation. Some theorists, moreover, have maintained that it is only the use of symbols that exhibits or indicates the presence of mind and mental states.

Over the years, this tripartite division of signs, although often challenged, has retained its influence. More recently, an alternative approach to representational systems (or, as he calls them, ‘symbol systems’) has been put forth by the American philosopher Nelson Goodman (1906-98). (Goodman’s treatment of the classical problem of ‘induction’, often phrased in terms of finding some reason to expect that nature is uniform, is also celebrated: in Fact, Fiction, and Forecast (1954) he showed that we need, in addition, some reason for preferring some uniformities to others, for without such a selection the uniformity of nature is vacuous.) Goodman (1976) proposed a set of syntactic and semantic features for categorizing representational systems. His theory provides for a finer discrimination among types of systems than Peirce’s categories allow. What also emerges clearly is that many rich and useful systems of representation lack a number of features taken to be essential to linguistic or sentential forms of representation, e.g., discrete alphabets and vocabularies, syntax, logical structure, inference rules, compositional semantics and recursive compounding devices.

As a consequence, although these representations can be appraised for accuracy or correctness, it does not seem possible to analyse such evaluative notions along the lines of standard truth theories, geared as they are to structured sentential systems.

If we resist the equation of the justificatory and explanatory work of reason-giving, we must look for a connection between reasons and actions/beliefs that is present in cases of genuine reason-giving explanation and absent in mere rationalization (a connection that is present when we act on our better judgement, and absent when we fail to). The classical suggestion in this context is causality: in cases of genuine explanation, the reason-providing intentional states cause the beliefs/actions for which they also provide reasons. This position, in addition, seems to find support from the conditionals and counterfactuals that our reason-giving explanations sustain, which parallel those in other cases of causal explanation. In general terms, where my reason explains my action, the reason was, in the circumstances, necessary for the action and at least made its occurrence probable. These conditional links can be explained if we accept that the reason-giving link is also a causal one; any alternative account would therefore need to accommodate them as well.

The defence of the view that reasons are causes can seem arbitrary at points. Why does explanation require citing the cause of a phenomenon, but not the next link back in the chain of causes? Perhaps what is not generally true of explanation is true only of mentalistic explanation: only in giving the latter type are we obliged to give the cause of a cause. However, this too seems arbitrary. What is the difference between mentalistic and non-mentalistic explanation that would justify imposing more stringent restrictions on the former? The same argument applies to non-cognitive mental states, such as sensations or emotions. Opponents of behaviourism sometimes reply that mental states can be observed: each of us, through ‘introspection’, can observe at least some mental states, namely our own - at least those of which we are conscious.

To this point, the distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. It has historical antecedents in Aristotle’s similar (but not identical) distinction between final and efficient causes - the latter being that (person, fact, or condition) which is responsible for an effect. Recently, the contrast has been drawn primarily in the domain of action and, secondarily, elsewhere.

Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day, or simply: to get it there in a day. Strictly, the reason is expressed by ‘to get it there in a day’. But this expresses my reason only because I am suitably motivated: I am in a reason state, wanting to get the letter there in a day. It is reason states - especially wants, beliefs and intentions - and not reasons strictly so called, that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.

If reason states can motivate, however, why (apart from confusing them with reasons proper) deny that they are causes? For one thing, they are not events, at least in the usual sense entailing change; they are dispositional states (this contrasts them with occurrences, but does not imply that they admit of dispositional analysis). It has also seemed to those who deny that reasons are causes that the former justify, as well as explain, the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The ‘logical connection argument’ proceeds from this claim to the conclusion that reasons are not causes.

These arguments are inconclusive. First, even if causes are events, sustaining causation may explain, as where the standing of a broken table is explained by the support of stacked boards replacing its missing legs. Second, the ‘because’ in ‘I sent it by express because I wanted to get it there in a day’ seems causal; construed non-causally, it would at best rationalize, rather than justify, the action. And third, if any non-contingent connection can be established between, say, my wanting something and the action it explains, there are close causal analogues, such as the connection between bringing a magnet to iron filings and their gravitating to it: this is, after all, a ‘definitive’ connection, expressing part of what it is to be magnetic, yet the magnet causes the filings to move.

There is, then, a clear distinction between reasons proper and causes, and even between reason states and event causes; but the distinction cannot be used to show that the relation between reasons and the actions they justify is in no way causal. Precisely parallel points hold in the epistemic domain (and, indeed, for anything that similarly admits of justification and explanation by reasons). Suppose my reason for believing that you received my letter today is that I sent it by express yesterday. My reason, strictly speaking, is that I sent it by express yesterday; my reason state is my believing this. Arguably, my reason justifies the further proposition for which it is my reason, while my reason state - my evidence belief - both explains and justifies my belief that you received the letter today. I can say that what justifies that belief is [in fact] that I sent the letter by express yesterday, but this statement expresses my believing that evidence proposition; the belief that you received the letter is not justified by the mere truth of the proposition (and can be justified even if that proposition is false).

Similarly, there are, for belief as for action, at least five main kinds of reason: (1) normative reasons - reasons (objective grounds) there are to believe, say, that there is a greenhouse effect; (2) person-relative normative reasons - reasons for, say, me to believe; (3) subjective reasons - reasons I have to believe; (4) explanatory reasons - reasons why I believe; and (5) motivating reasons - reasons for which I believe. Kinds (1) and (2) are propositions and thus not serious candidates to be causal factors. The states corresponding to (3) may or may not be causal elements. Reasons why, kind (4), are always (sustaining) explainers, though not necessarily even prima facie justifiers, since a belief can be causally sustained by factors with no evidential value. Motivating reasons are both explanatory and possess whatever minimal prima facie justificatory power (if any) a reason must have to be a basis of belief.

Current discussion of the reasons-causes issue has shifted from the question whether reason states can causally explain to the perhaps deeper questions whether they can justify without so explaining, and what kind of causal connections with actions and beliefs they require in order to explain them. ‘Reliabilists’ tend to take a belief as justified by a reason only if it is held at least in part for that reason, in a sense implying, but not entailed by, being causally based on that reason. ‘Internalists’ often deny this, perhaps thinking that we lack internal access to the relevant causal connections. But internalists need internal access only to what justifies - say, the reason state - and not to the (perhaps quite complex) relations it bears to the belief it justifies, in virtue of which it does so. Many questions also remain concerning the very nature of causation, reason-hood, explanation and justification.

Nevertheless, for most causal theorists, the radical separation of the causal and rationalizing role of reason-giving explanations is unsatisfactory. For such theorists, where we can legitimately point to an agent’s reasons to explain a certain belief or action, then those features of the agent’s intentional states that render the belief or action reasonable must be causally relevant in explaining how the agent came to believe or act in a way which they rationalize. One way of putting this requirement is that reason-giving states not only cause but also causally explain their explananda.

The explanans/explanandum terminology has wide currency in philosophical discourse because it allows a succinctness unobtainable in ordinary English. Whether in science, philosophy or everyday life, one often offers explanations. The particular statements, laws, theories or facts that are used to explain something are collectively called the ‘explanans’, and the target of the explanans - the thing to be explained - is called the ‘explanandum’. Thus, one might explain why ice forms on the surface of lakes (the explanandum) in terms of the special property of water to expand as it approaches freezing point, together with the fact that materials less dense than liquid water float in it (the explanans). The terms come from two different Latin grammatical forms: ‘explanans’ is the present participle of the verb meaning ‘to explain’, and ‘explanandum’ is a gerundive noun derived from the same verb.

In contrast to what merely happens to us, or to parts of us, actions are what we do. My moving my finger is an action, to be distinguished from the mere motion of that finger. My snoring, likewise, is not something I ‘do’ in the intended sense, though in another, broader sense it is something I often ‘do’ while asleep.

The contrast has both metaphysical and moral import. With respect to my snoring, I am passive, and am not morally responsible, unless, for example, I should have taken steps earlier to prevent my snoring. But in cases of genuine action, I am the cause of what happens, and I may properly be held responsible, unless I have an adequate excuse or justification. When I move my finger, I am the cause of the finger’s motion. When I say ‘Good morning’, I am the cause of the utterance. True, the immediate causes are muscle contractions in the one case and lung, lip and tongue motions in the other. But this is compatible with my being the cause - perhaps I cause these immediate causes, or perhaps it is just the case that some events can have both an agent and other events as their cause.

All this is suggestive, but not really adequate. We do not understand the intended force of ‘I am the cause’ any more than we understand the intended force of ‘snoring is not something I do’. If I trip and fall in your flower garden, ‘I am the cause’ of any resulting damage, but neither the damage nor my fall is my action.

Before considering how we are to explain what actions, as contrasted with ‘mere’ doings, are, it will be convenient to say something about how actions are to be individuated.

If I say ‘Good morning’ to you over the telephone, I have acted. But how many actions have I performed, and how are they related to one another and to associated events? We may describe what is done in several ways:

(1) Move my tongue and lips in certain ways, while exhaling.

(2) Say ‘Good morning’.

(3) Cause a certain sequence of modifications in the current flowing in your telephone.

(4) Say ‘Good morning’ to you.

(5) Greet you.

The list - not exhaustive, by any means - is of act types. I have performed an action of each type, and a ‘by’ relation holds among them: I greet you by saying ‘Good morning’ to you, but not the converse, and similarly for the others on the list. But are these five distinct actions I performed, one of each type, or are the five descriptions all of a single action, which was of these five (and more) types? Both positions, and a variety of intermediate positions, have been defended.

How many words are there in the sentence ‘The cat is on the mat’? There are, of course, at least two answers to this question, precisely because one can enumerate either the word types, of which there are five, or the word tokens, of which there are six. Moreover, depending on how one chooses to think of word types, another answer is possible: since the sentence contains definite articles, nouns, a preposition and a verb, there are four grammatically different types of word in the sentence.
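The token/type counts can be made concrete in a few lines of code. The following is a minimal illustrative sketch (Python; the variable names are merely illustrative, and only the example sentence is taken from the text):

    # Count word tokens and word types in the example sentence.
    sentence = "The cat is on the mat"

    # Tokens: every occurrence of a word counts separately.
    tokens = sentence.lower().split()

    # Types: repeated words collapse into a single representative.
    types = set(tokens)

    print(len(tokens))  # 6 tokens: 'the' occurs twice
    print(len(types))   # 5 types: 'the' is counted once

The further count of four mentioned above would correspond to classifying the five types into grammatical categories (article, noun, preposition, verb).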

The type/token distinction, understood as a distinction between sorts of things and their particular instances, matters elsewhere in philosophy of mind. In particular, the identity theory asserts that mental states are physical states, and this raises the question whether the identity in question is an identity of types or of tokens.

During the past two decades or so, the concept of supervenience has seen increasing service in philosophy of mind. The thesis that the mental is supervenient on the physical - roughly, the claim that the mental character of a thing is wholly determined by its physical nature - has played a key role in the formulation of some influential positions on the mind-body problem. Much of our evidence for mind-body supervenience seems to consist in our knowledge of specific correlations between mental states and physical (in particular, neural) processes in humans and other organisms. Such knowledge, although extensive and in some ways impressive, is still quite rudimentary and far from complete (what do we know, or can we expect to know, about the exact neural substrate for, say, the sudden thought that you are late with your rent payment this month?). It may be that our willingness to accept mind-body supervenience, although based in part on specific psychological dependencies, has to be supported by a deeper metaphysical commitment to the primacy of the physical: it may in fact be an expression of such a commitment.

However, there are kinds of mental state that raise special issues for mind-body supervenience. One such kind is ‘wide content’ states, i.e., contentful mental states that seem to be individuated essentially by reference to objects and events outside the subject. Here the notion of a concept, like the related notion of meaning, becomes central. The word ‘concept’ itself is applied to a bewildering assortment of phenomena commonly thought to be constituents of thought. These include internal mental representations, images, words, stereotypes, senses, properties, reasoning and discrimination abilities, and mathematical functions. Given the lack of anything like a settled theory in this area, it would be a mistake to fasten readily on any one of these phenomena as the unproblematic referent of the term. One does better to make a survey of the geography of the area and gain some idea of how these phenomena might fit together, leaving aside for the nonce just which of them deserve to be called ‘concepts’ as ordinarily understood.

There is a specific role that concepts are arguably intended to play that may serve as a point of departure. Suppose one person thinks that capitalists exploit workers, and another that they do not. Call the thing they disagree about a ‘proposition’, e.g., that capitalists exploit workers. It is in some sense shared by them as the object of their disagreement, and it is expressed by the sentence that follows the verb ‘thinks that’ (mental verbs that take such sentential complements are verbs of ‘propositional attitude’). Concepts are the constituents of such propositions, just as the words ‘capitalist’, ‘exploit’, and ‘workers’ are constituents of the sentence. These people could have these beliefs only if they had, inter alia, the concepts [capitalist], [exploit] and [worker].

Propositional attitudes, and thus concepts, are constitutive of the familiar form of explanation (so-called ‘intentional explanation’) by which we ordinarily explain the behaviour and states of people, many animals and, perhaps, some machines. The concept of intentionality was originally used by medieval scholastic philosophers. It was reintroduced into European philosophy by the German philosopher and psychologist Franz Clemens Brentano (1838-1917), whose thesis, proposed in his ‘Psychology from an Empirical Standpoint’ (1874), is that it is the intentionality or directedness of mental states that marks off the mental from the physical.

Many mental states and activities exhibit the feature of intentionality, being directed at objects. Two related things are meant by this. First, when one desires or believes or hopes, one always desires or believes or hopes something. Assume, for example, that belief report (1) is true.

(1) Most Canadians believe that George Bush is a Republican.

Report (1) tells us that certain subjects, most Canadians, have a certain attitude, belief, to something, designated by the nominal phrase ‘that George Bush is a Republican’ and identified by its content-sentence:

(2) George Bush is a Republican.

Following Russell and contemporary usage, we may call the object referred to by the that-clause in (1), and expressed by (2), a proposition. Notice, too, that sentence (2) might also serve as most Canadians’ belief-text, a sentence whereby to express the belief that (1) reports them to have. An utterance of (2) by itself would assert the truth of the proposition it expresses, but as part of (1) its role is not to assert anything, but to identify what the subjects believe. This same proposition can be the object of other attitudes of other people: most Canadians may regret that Bush is a Republican, Reagan may remember that he is, and Buchanan may doubt that he is.

Following Brentano (1960), we can focus on two puzzles about the structure of intentional states and activities, an area in which the philosophy of mind meets the philosophy of language, logic and ontology. The term ‘intentionality’ should not be confused with the terms ‘intention’ and ‘intension’. There is, nonetheless, an important connection between intension and intentionality, for semantical systems, like extensional model theory, that are limited to extensions cannot provide plausible accounts of the language of intentionality.

The attitudes are philosophically puzzling because it is not easy to see how their intentionality fits with another conception of them, as local mental phenomena.

Beliefs, desires, hopes, and fears seem to be located in the heads or minds of the people who have them. Our attitudes are accessible to us through ‘introspection’: a Canadian can tell that he believes Bush to be a Republican just by examining the ‘contents’ of his own mind; he does not need to investigate the world around him. We think of attitudes as being caused at certain times by events that impinge on the subject’s body, especially by perceptual events, such as reading a newspaper or seeing a picture of an ice-cream cone. The psychological level of description thus carries with it a mode of explanation which has no echo in physical theory: we regard ourselves and each other as rational purposive creatures, fitting our beliefs to the world as we perceive it and seeking to obtain what we desire in the light of them. Reason-giving explanations can be offered not only for actions and beliefs, which will receive most attention here, but also for desires, intentions, hopes, fears, angers, affections, and so forth. Indeed, their position within a network of rationalizing links is part of the individuating character of this range of psychological states and the intentional acts they explain.

Meanwhile, these attitudes can in turn cause changes in other mental phenomena, and eventually in the observable behaviour of the subject. Seeing a picture of an ice-cream cone leads to a desire for one, which leads me to forget the meeting I am supposed to attend and walk to the ice-cream parlour instead. All of this seems to require that attitudes be states and activities that are localized in the subject.

Nonetheless, the phenomenon of intentionality suggests that the attitudes are essentially relational in nature: they involve relations to the propositions at which they are directed and to the objects they are about. These objects may be quite remote from the minds of subjects. An attitude seems to be individuated by the agent, the type of attitude (belief, desire, and so on), and the proposition at which it is directed. It seems essential to the attitude reported in (1), for example, that it is directed toward the proposition that Bush is a Republican. And it seems essential to this proposition that it is about Bush. But how can a mental state or activity of a person essentially involve some other individual? The problem is brought out by two classical puzzles, those of ‘no-reference’ and ‘co-reference’.

The classical solution to such problems is to suppose that intentional states are only indirectly related to concrete particulars, like George Bush, whose existence is contingent and which can be thought about in a variety of ways. The attitudes directly involve abstract objects of some sort, whose existence is necessary and whose nature the mind can directly grasp. These abstract objects provide concepts, or ways of thinking of, concrete particulars. Different concepts of the same particular correspond to different inferential and practical roles: different perceptions and memories give rise to beliefs involving them, and they serve as reasons for different actions. If we individuate propositions by concepts rather than by the individuals conceived, the co-reference problem disappears.

The proposal has the bonus of also taking care of the no-reference problem. Some propositions will contain concepts that are not, in fact, of anything. These propositions can still be believed, desired, and the like.

This basic idea has been worked out in different ways by a number of authors. The Austrian philosopher Ernst Mally thought that propositions involved abstract particulars that ‘encoded’ properties, like being the loser of the 1992 election, rather than concrete particulars, like Bush, who exemplified them. There are abstract particulars that encode clusters of properties that nothing exemplifies, and two abstract objects can encode different clusters of properties that are exemplified by a single thing. The German philosopher Gottlob Frege distinguished between the ‘sense’ and the ‘reference’ of expressions. The senses of ‘George Bush’ and ‘the person who will come in second in the election’ are different, even though the references are the same. Senses are grasped by the mind, are directly involved in propositions, and incorporate ‘modes of presentation’ of objects.

For most of the twentieth century, the most influential approach was that of the British philosopher Bertrand Russell. Russell (1905-1929) in effect recognized two kinds of propositions. ‘Singular propositions’ consist of particulars together with properties or relations; an example is the proposition consisting of Bush and the property of being a Republican. ‘General propositions’ involve only universals. The general proposition corresponding to ‘someone is a Republican’ would be a complex consisting of the property of being a Republican and the higher-order property of being instantiated. The terms ‘singular proposition’ and ‘general proposition’ are from Kaplan (1989).
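The contrast can be sketched in modern notation (not Russell’s own symbolism; the letters are merely illustrative), writing b for Bush and R for the property of being a Republican:

\[ \text{singular: } \langle b, R \rangle \qquad\qquad \text{general: } \exists x\, R(x) \]

The singular proposition contains the individual b itself as a constituent, while the general proposition contains only the universal R, predicated as being instantiated by something or other.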

Historically, a great deal has been asked of concepts. As shareable constituents of the objects of attitudes, they presumably figure in cognitive generalizations and explanations of animals’ capacities and behaviour. They are also presumed to serve as the meanings of linguistic items, underwriting relations of translation, definition, synonymy, antonymy and semantic implication. Much work in the semantics of natural language takes itself to be addressing conceptual structure.

Concepts have also been thought to be the proper objects of philosophical analysis, the activity practised by Socrates and twentieth-century ‘analytic’ philosophers when they ask about the nature of justice, knowledge or piety, and expect to discover answers by means of a priori reflection alone.

The expectation that one sort of thing could serve all these tasks went hand in hand with what has come to be known as the ‘Classical View’ of concepts, according to which they have an ‘analysis’ consisting of conditions that are individually necessary and jointly sufficient for their satisfaction, and which are known to any competent user of them. The standard example is the especially simple one of [bachelor], which seems to be identical to [eligible unmarried male]. A more interesting, but problematic, one has been [knowledge], whose analysis was traditionally thought to be [justified true belief].
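Such an analysis can be written as a biconditional whose right-hand conjuncts are each individually necessary, and together jointly sufficient, for falling under the concept. A minimal rendering of the standard example (the predicate letters are illustrative only):

\[ \forall x\,\bigl(\mathrm{Bachelor}(x) \leftrightarrow \mathrm{Eligible}(x) \wedge \mathrm{Unmarried}(x) \wedge \mathrm{Male}(x)\bigr) \]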

This Classical View seems to offer an illuminating answer to a certain form of metaphysical question - in virtue of what is something the kind of thing it is, e.g., in virtue of what is a bachelor a bachelor? - and it does so in a way that supports counterfactuals: it tells us what would satisfy the concept in situations other than the actual one (although all actual bachelors might turn out to be freckled, it is possible that there might be unfreckled ones, since the analysis does not exclude that). The View also seems to offer an answer to the epistemological question of how people can know a priori (or, independently of experience) about the nature of many things, e.g., that bachelors are unmarried: it is constitutive of the competency (or possession) conditions of a concept that users know its analysis, at least on reflection.

Consider, then, actions as doings having mentalistic explanations. Coughing is sometimes like snoring and sometimes like saying ‘Good morning’ - that is, sometimes a mere doing and sometimes an action. Deliberate coughing can be explained by invoking an intention to cough, a desire to cough or some other ‘pro-attitude’ toward coughing, a reason for or purpose in coughing, or something similarly mental. This is especially natural if we think of actions as ‘outputs’ of the mental ‘machine’. The functionalist thinks of mental states and events as causally mediating between a subject’s sensory inputs and the subject’s ensuing behaviour. Functionalism itself is the stronger doctrine that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the set of functional relations it bears to the subject’s perceptual stimuli, behavioural responses and other mental states.

Twentieth-century functionalism gained credibility in an indirect way, by being perceived as affording the least objectionable solution to the mind-body problem.

Disaffected from Cartesian dualism and from the ‘first-person’ perspective of introspective psychology, the behaviourists had claimed that there is nothing to the mind but the subject’s behaviour and dispositions to behave. To refute the view that a certain level of behavioural dispositions is necessary for a mental life, we need convincing cases of thinking stones, utterly incurable paralytics or disembodied minds. But to some these alleged possibilities are merely that - alleged.

To refute the view that a certain level of behavioural dispositions is sufficient for a mental life, we need convincing cases of rich behaviour with no accompanying mental states. The typical example is of a puppet controlled by radio-wave links, by other minds outside the puppet’s hollow body. But one might wonder whether the dramatic devices are producing the anti-behaviourist intuition all by themselves. And how could the dramatic devices make a difference to the facts of the case? If the puppeteers were replaced by a machine, not designed by anyone, yet storing a vast number of input-output conditionals, which was reduced in size and placed in the puppet’s head, would we still have a compelling counterexample to the behaviour-as-sufficient view? It is at least not so clear.

Such an example would work equally well against the anti-eliminativist version of the view - the view that mental states supervene on behavioural dispositions. But supervenient behaviourism could be refuted by something less ambitious. The ‘X-worlders’ of the American philosopher Hilary Putnam (1926-), who are in intense pain but do not betray this in their verbal or non-verbal behaviour, behaving just as pain-free human beings do, would be the right sort of case. However, even if Putnam has produced a counterexample for pain - which the American philosopher of mind Daniel Clement Dennett (1942-), for one, would doubtless deny - an ‘X-worlder’ story designed to refute supervenient behaviourism with respect to the attitudes or linguistic meaning will be less intuitively convincing. Behaviourist resistance is easier here, for the reason that having a belief, or meaning a certain thing, lacks a distinctive phenomenology.

There is a more sophisticated line of attack. As Willard Van Orman Quine (1908-2000), the most influential American philosopher of the latter half of the twentieth century, has remarked, some have taken his thesis of the indeterminacy of translation as a reductio of his behaviourism. For this to be convincing, Quine’s argument for the indeterminacy thesis would have to be persuasive in its own right, and that is a disputed matter.

If behaviourism is finally laid to rest to the satisfaction of most philosophers, it will probably not be by counterexamples, or by a reductio from Quine’s indeterminacy thesis. Rather, it will be because the behaviourists’ worries about other minds and the public availability of meaning have been shown to be groundless, or not to require behaviourism for their solution. But we can be sure that this happy day will take some time to arrive.

Quine became noted for his claim that the way one uses language determines what kinds of things one is committed to saying exist. Moreover, the justification for speaking one way rather than another, just as the justification for adopting one conceptual system rather than another, was a thoroughly pragmatic one for Quine (see Pragmatism). He also became known for his criticism of the traditional distinction between synthetic statements (empirical, or factual, propositions) and analytic statements (necessarily true propositions). Quine made major contributions in set theory, a branch of mathematical logic concerned with the relationship between classes. His published works include Mathematical Logic (1940), From a Logical Point of View (1953), Word and Object (1960), Set Theory and Its Logic (1963), and Quiddities: An Intermittently Philosophical Dictionary (1987). His autobiography, The Time of My Life, appeared in 1985.

Functionalism, and cognitive psychology considered as a complete theory of human thought, inherited some of the same difficulties that earlier beset behaviourism and the identity theory. These remaining obstacles fall into two main categories: intentionality problems and qualia problems.

Propositional attitudes such as beliefs and desires are directed upon states of affairs which may or may not actually obtain, e.g., that the Republicans (or, for that matter, the Liberals) will win, and are about individuals who may or may not exist, e.g., King Arthur. Franz Brentano raised the question of how a purely physical entity or state could have the property of being ‘directed upon’ or about a non-existent state of affairs or object: that is not the sort of feature that ordinary, purely physical objects can have.

The standard functionalist reply is that propositional attitudes have Brentano’s feature because the internal physical states and events that realize them ‘represent’ actual or possible states of affairs. What they represent is determined, at least in part, by their functional roles. That is, mental events, states or processes with content involve reference to objects, properties or relations. A mental state with content can fail to refer, but there always exists a specific condition for a state with content to refer to certain things. When the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them.

What is it that distinguishes items that serve as representations from other objects or events? And what distinguishes the various kinds of symbols from each other? As to the first question, there has been general agreement that the basic notion of a representation involves one thing’s ‘standing for’, ‘being about’, ‘pertaining to’, ‘referring to’ or ‘denoting’ something else. The major debates here have been over the nature of this connection between a representation and that which it represents. As to the second, perhaps the most famous and extensive attempt to organize and differentiate among alternative forms of representation is found in the works of C.S. Peirce (1931-1935). Peirce’s theory of signs is complex, involving a number of concepts and distinctions that are no longer paid much heed. The aspect of his theory that remains influential and is widely cited is his division of signs into icons, indices and symbols. Icons are signs that are said to be like or resemble the things they represent, e.g., portrait paintings. Indices are signs that are connected to their objects by some causal dependency, e.g., smoke as a sign of fire. Symbols are those signs that are related to their objects by virtue of use or association: they are arbitrary labels, e.g., the word ‘table’. This division among signs, or variants of it, is routinely put forth to explain differences in the way representational systems are thought to establish their links to the world. Further, placing a representation in one of the three divisions has been used to account for the supposed differences between conventional and non-conventional representations, between representations that do and do not require learning to understand, and between representations, like language, that need to be read and those which do not require interpretation. Some theorists, moreover, have maintained that it is only the use of symbols that exhibits or indicates the presence of mind and mental states.

Representations, along with mental states, especially beliefs and thoughts, are said to exhibit ‘intentionality’ in that they refer to or stand for something else. The nature of this special property, however, has seemed puzzling. Not only is intentionality often assumed to be limited to humans, and possibly a few other species, but the property itself appears to resist characterization in physicalist terms. The problem is most obvious in the case of ‘arbitrary’ signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes; but the problem remains even for iconic representation.

There are at least two difficulties. One is that of saying exactly how a physical item’s representational content is determined: in virtue of what does a neurophysiological state represent precisely that a certain candidate will win? An answer to that general question is what the American philosopher of mind Jerry Alan Fodor (1935-) has called a ‘psychosemantics’, and several attempts at one have been made. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of ‘holists’ such as the American philosopher Donald Herbert Davidson (1917-2003), who works within a generally holistic theory of knowledge and meaning: a radical interpreter can tell when a subject holds a sentence true and, using a principle of ‘charity’, ends up making an assignment of truth conditions to the subject’s sentences. This holist approach has seemed to many to offer some hope of vindicating meaning as a respectable notion, even within a broadly ‘extensional’ approach to language. A different line again is taken by instrumentalists about mental ascription, such as Daniel Clement Dennett (1942-); Dennett has also been a major force in showing how the philosophy of mind needs to be informed by work in the surrounding sciences.

In giving an account of what someone believes, does essential reference have to be made to how things are in the environment of the believer? And, if so, exactly what relation does the environment have to the belief? These questions involve taking sides in the externalism/internalism debate. To a first approximation, the externalist holds that one’s propositional attitudes cannot be characterized without reference to the disposition of objects and properties in the world - the environment - in which one is situated. The internalist thinks that propositional attitudes (especially belief) must be characterizable without such reference. The reason that this is only a first approximation of the contrast is that there can be different sorts of externalism. Thus, one sort of externalist might insist that you could not have, say, a belief that grass is green unless it could be shown that there was some relation between you, the believer, and grass. Had you never come across the plant which makes up lawns and meadows, beliefs about grass would not be available to you. However, this does not mean that you have to be in the presence of grass in order to entertain a belief about it, nor does it even mean that there was necessarily a time when you were in its presence. For example, it might have been the case that, though you have never seen grass, it has been described to you. Or, at the extreme, perhaps grass no longer exists anywhere in the environment, but your ancestors’ contact with it left some sort of genetic trace in you, and the trace is sufficient to give rise to a mental state that could be characterized as about grass.

At the more specific level that has been the focus in recent years, the questions become: what do thoughts have in common in virtue of which they are thoughts? That is, what makes a thought a thought? What makes a pain a pain? Cartesian dualism said the ultimate nature of the mental was to be found in a special mental substance. Behaviourism identified mental states with behavioural dispositions; physicalism in its most influential version identifies mental states with brain states. One could imagine that the individual states that occupy the relevant causal roles turn out not to be bodily states: for example, they might instead be states of a Cartesian unextended substance. But it is overwhelmingly likely that the states that do occupy those causal roles are all tokens of bodily-state types. However, a problem does seem to arise about the properties of mental states. Suppose ‘pain’ is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject a dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So, disallowing dualism with respect to substances and their states simply leads to its reappearance at the level of the properties of those states.

The problem concerning mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irreducibly non-physical. So, even if mental states are all identical with physical states, these states appear to have properties that are not physical. And if mental states do actually have non-physical properties, the identity of mental with physical states would not sustain a thoroughgoing mind-body materialism.

A more sophisticated reply to the difficulty about mental properties is due independently to D.M. Armstrong (1968) and David Lewis (1972), who argue that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental or physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.

It should be mentioned that properties can be more complex than the above allows. For instance, in the sentence ‘John is married to Mary’, we are attributing to John the property of being married. And, unlike the property of being bald, this property of John is essentially relational. Moreover, it is commonly said that ‘is married to’ expresses a relation, rather than a property; the terminology is not fixed - some authors speak of relations as different from properties in being more complex but like them in being non-linguistic - though it is more common to treat relations as a sub-class of properties.

The Classical View, meanwhile, has always had to face the difficulty of ‘primitive’ concepts: it is all well and good to claim that competence consists in some sort of mastery of a definition, but what about the primitive concepts in which a process of definition must ultimately end? Here the British Empiricism of the seventeenth and eighteenth centuries began to offer a solution: all the primitives were sensory. Indeed, the empiricists expanded the Classical View to include the claim, now often taken uncritically for granted in discussions of that view, that all concepts are ‘derived from experience’: ‘every idea is derived from a corresponding impression’. In the work of John Locke (1632-1704), George Berkeley (1685-1753) and David Hume (1711-76) this was thought to mean that concepts were somehow ‘composed’ of introspectible mental items - ‘images’, ‘impressions’ - that were ultimately decomposable into basic sensory parts. Thus, Hume analysed the concept of [material object] as involving certain regularities in our sensory experience, and [cause] as involving constant conjunction.

Berkeley noticed a problem with this approach that every generation has had to rediscover: if a concept is a sensory impression, like an image, then how does one distinguish the general concept [triangle] from a more particular one - say, [isosceles triangle] - that would serve in imaging the general one? More recently, Wittgenstein (1953) called attention to the multiple ambiguity of images. And, in any case, images seem quite hopeless for capturing the concepts associated with logical terms (what is the image for negation, or for possibility?). Whatever the role of such representations, full conceptual competence must involve something more.

Indeed, in addition to images and impressions and other sensory items, a full account of concepts needs to consider issues of logical structure. This is precisely what the ‘logical positivists’ did, focussing on logically structured sentences instead of sensations and images, and transforming the empiricist claim into the famous ‘Verifiability Theory of Meaning’: the meaning of a sentence is the means by which it is confirmed or refuted, ultimately by sensory experience; the meaning or concept associated with a predicate is the means by which people confirm or refute whether something satisfies it.

This once-popular position has come under much attack in philosophy in the last fifty years. In the first place, few, if any, successful ‘reductions’ of ordinary concepts, like [material object] or [cause], to purely sensory concepts have ever been achieved. The position is associated above all with Alfred Jules Ayer (1910-89), who proved to be one of the most important modern epistemologists. In his first and most famous book, ‘Language, Truth and Logic’, Ayer dismisses epistemology to the extent that it is concerned with the a priori justification of our ordinary or scientific beliefs, since the validity of such beliefs ‘is an empirical matter, which cannot be settled by such means’. However, he does take positions which have a bearing on epistemology. For example, he is a phenomenalist, believing that material objects are logical constructions out of actual and possible sense-experience, and an anti-foundationalist, at least in one sense, denying that there is a bedrock level of indubitable propositions on which empirical knowledge can be based. As regards the main specifically epistemological problem he addressed, the problem of our knowledge of other minds, he is essentially behaviouristic, since the verification principle pronounces unintelligible the hypothesis of occurrences of intrinsically inaccessible experiences.

Although his views were later modified, Ayer early maintained that all meaningful statements are either logical or empirical. According to his principle of verification, a statement is considered empirical only if some sensory observation is relevant to determining its truth or falsity. Sentences that are neither logical nor empirical - including traditional religious, metaphysical, and ethical sentences - are judged nonsensical. Other works of Ayer include The Problem of Knowledge (1956), the Gifford Lectures of 1972-73 published as The Central Questions of Philosophy (1973), and Part of My Life: The Memoirs of a Philosopher (1977).

Ayer’s main contributions to epistemology are in his book ‘The Problem of Knowledge’, which he himself regarded as superior to ‘Language, Truth and Logic’ (Ayer, 1985). There Ayer develops a fallibilist type of foundationalism, according to which processes of justification or verification terminate in someone’s having an experience, but there is no class of infallible statements based on such experiences. Consequently, in making statements based on experience, even simple reports of observation, we ‘make what appears to be a special sort of advance beyond our data’ (1956). And it is the resulting gap which the sceptic exploits. Ayer describes four possible responses to the sceptic: naïve realism, according to which material objects are directly given in perception, so that there is no advance beyond the data; reductionism, according to which physical objects are logically constructed out of the contents of our sense-experiences, so that again there is no real advance beyond the data; a position according to which there is an advance, but one which can be supported by the canons of valid inductive reasoning; and lastly a position called ‘descriptive analysis’, according to which ‘we can give an account of the procedures that we actually follow . . . but there [cannot] be a proof that what we take to be good evidence really is so’.

Ayer’s reason why our sense-experiences afford us grounds for believing in the existence of physical objects is simply that sentences which are taken as referring to physical objects are used in such a way that our having the appropriate experiences counts in favour of their truth. In other words, having such experiences is exactly what justification of our ordinary beliefs about the nature of the world ‘consists in’. The suggestion is, therefore, that the sceptic is making some kind of mistake or indulging in some sort of incoherence in supposing that our experience may not rationally justify our commonsense picture of what the world is like. Against this, however, stands the familiar fact that the sceptic’s undermining hypotheses seem perfectly intelligible and even epistemically possible. Ayer’s response seems weak relative to the power of the sceptical puzzles.

The concept of ‘the given’ refers to the immediate apprehension of the contents of sense experience, expressed in first-person, present-tense reports of appearances. Apprehension of the given is seen as immediate both in a causal sense, since it lacks the usual causal chain involved in perceiving real qualities of physical objects, and in an epistemic sense, since judgements expressing it are justified independently of all other beliefs and evidence. Some proponents of the idea of the given maintain that its apprehension is absolutely certain: infallible, incorrigible and indubitable. It has been claimed also that a subject is omniscient with regard to the given: if a property appears, then the subject knows this.

The doctrine dates back at least to Descartes, who argued in Meditation II that it was beyond all possible doubt and error that he seemed to see light, hear noise, and so forth. The empiricists added the claim that the mind is passive in receiving sense impressions, so that there is no subjective contamination or distortion here (even though the states apprehended are mental). The idea was taken up in twentieth-century epistemology by C.I. Lewis and A.J. Ayer, among others, who appealed to the given as the foundation for all empirical knowledge. (Empiricism, like any philosophical movement, is often challenged to show how its claims about the structure of knowledge and meaning can themselves be intelligible and known within the constraints it accepts.) Since beliefs expressing only the given were held to be certain and justified in themselves, they could serve as solid foundations.

The second argument for the need for foundations appeals to the possibility of incompatible but fully coherent systems of belief, only one of which could be completely true. In light of this possibility, coherence cannot suffice for complete justification. On a coherence theory, by contrast, justification is solely a matter of how a belief coheres with a system of beliefs. A further distinction cuts across the distinction between weak and strong coherence theories of justification: the distinction between positive and negative coherence theories. A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified - coherence has the power to produce justification - whereas according to a negative coherence theory, coherence has only the power to nullify justification.

Coherence theories of justification have a common feature, namely, that they are what are called ‘internalist theories of justification’: they affirm that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If, then, justification is solely a matter of internal relations between beliefs, we are left with the possibility that those internal relations might fail to correspond to any external reality. How, one might object, can a completely internal, subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?

The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from considerations of coherence theories of justification. What is required may be put by saying that one’s justification must be undefeated by errors in the background system of beliefs. A justification is undefeated by such errors just in case the correction of those errors would sustain the justification of the belief on the basis of the corrected system. So knowledge, on this sort of positive coherence theory, is true belief that coheres with the background belief system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error.

Without some independent indication that some of the beliefs within a coherent system are true, coherence in itself is no indication of truth. Fairy stories can cohere. But our criteria for justification must indicate to us the probable truth of our beliefs. Hence, within any system of beliefs there must be some privileged class of beliefs with which the others must cohere in order to be justified. In the case of empirical knowledge, such privileged beliefs must represent the point of contact between subject and world: they must originate in perception. When challenged, however, we justify our ordinary perceptual beliefs about physical properties by appeal to beliefs about appearances. The latter seem more suitable as foundations, since there is no class of more certain perceptual beliefs to which we appeal for their justification.

The argument that foundations must be certain was offered by the American philosopher C.I. Lewis (1883-1964). He held that no proposition can be probable unless some are certain. If the probability of all propositions or beliefs were relative to evidence expressed in others, and if these relations were linear, then any regress would apparently have to terminate in propositions or beliefs that are certain. But Lewis shows neither that such relations must be linear nor that regresses cannot terminate in beliefs that are merely probable or justified in themselves without being certain or infallible.

Arguments against the idea of the given originate with the German philosopher and founder of critical philosophy, Immanuel Kant (1724-1804). (The intellectual landscape in which Kant began his career was largely set by the German philosopher, mathematician and polymath Gottfried Wilhelm Leibniz (1646-1716), filtered through the principal follower and interpreter of Leibniz, Christian Wolff, who was primarily a mathematician but renowned as a systematic philosopher.) Kant argues in Book I of the Transcendental Analytic that percepts without concepts do not yet constitute any form of knowing. Being non-epistemic, they presumably cannot serve as epistemic foundations. Once we recognize that we must apply concepts of properties to appearances, and formulate beliefs utilizing those concepts, before the appearances can play any epistemic role, it becomes more plausible that such beliefs are fallible. The argument was developed in the twentieth century by Wilfrid Sellars (1912-89), whose work revolved around the difficulties of combining the scientific image of people and their world with the manifest image, our natural conception of ourselves as acquainted with intentions, meanings, colours, and other definitive aspects of human life. In his most influential paper, ‘Empiricism and the Philosophy of Mind’ (1956), and in many other papers, Sellars explored the nature of thought and experience. According to Sellars (1963), the idea of the given involves a confusion between sensing particulars (having sense impressions), which is non-epistemic, and having non-inferential knowledge of propositions referring to appearances. The former may be necessary for acquiring perceptual knowledge, but it is not itself a primitive kind of knowing; its being non-epistemic renders it immune from error, but also unsuitable to serve as an epistemological foundation. The latter, non-inferential perceptual knowledge, is fallible, requiring concepts acquired through trained responses to public physical objects.

The contention that even reports of appearances are fallible can be supported from several directions. First, it seems doubtful that we can look beyond our beliefs to compare them with an unconceptualized reality, whether mental or physical. Second, to judge that anything, including an appearance, is ‘F’, we must remember which property ‘F’ is, and memory is admitted by all to be fallible. Our ascribing ‘F’ is normally not explicitly comparative, but its correctness requires memory nevertheless, at least if we intend to ascribe a reinstantiable property: we must apply the concept of ‘F’ consistently, and it seems always at least logically possible to apply it inconsistently. If that is not possible - if, for example, in attending to an appearance I intend merely to pick out demonstratively whatever property appears - then I seem not to be expressing a genuine belief. My apprehension of the appearance will then not justify any other beliefs, and once more it will be unsuitable as an epistemological foundation.

Ayer (1950) sought to distinguish propositions expressing the given not by their infallibility, but by the alleged fact that grasping their meaning suffices for knowing their truth. However, this will be so only if their meaning is purely demonstrative, and so only if the propositions fail to express beliefs that could ground others. If one uses genuine predicates - for example, ‘C♯’ as applied to tones - then one may grasp their meaning and yet be unsure in their application to appearances. Limiting claims to appearances eliminates one major source of error in claims about physical objects - appearances cannot appear other than they are. Ayer’s requirement of grasping meaning eliminates a second source of error, conceptual confusion. But a third major source, misclassification, remains genuine and can obtain in this limited domain, even when Ayer’s requirement is satisfied.

Any proponent of the given faces a dilemma. If the terms used in statements expressing its apprehension are purely demonstrative, then such statements, assuming they are statements, are certain, but fail to express beliefs that could serve as foundations for knowledge: if what is expressed is not awareness of genuine properties, then the awareness does not justify its subject in believing anything else. However, if statements about what appears use genuine predicates that apply to reinstantiable properties, then the beliefs expressed cannot be infallible. Coherentists would add that such genuine beliefs stand in need of justification themselves and so cannot be foundations.

Contemporary foundationalists deny the coherentist’s claim while eschewing the claim that foundations, in the form of reports about appearances, are infallible. They seek alternatives to the given as foundations. Although the arguments against infallibility are strong, other objections to the idea of foundations are not. That concepts of objective properties are learned prior to concepts of appearances, for example, implies neither that claims about objective properties are epistemically prior, nor that claims about appearances cannot be prior in chains of justification. That there can be no knowledge prior to the acquisition and consistent application of concepts allows for propositions whose truth requires only consistent application of concepts, and this may be so for some claims about appearances.

Coherentists will claim that a subject requires evidence that he applies concepts consistently - that, in distinguishing red from other colours that appear, he does so reliably. Beliefs about red appearances could not then be justified independently of other beliefs expressing that evidence. To save the part of the doctrine of the given that holds beliefs about appearances to be self-justified, we require an account of how such justification is possible, of how some beliefs about appearances can be justified without appeal to evidence. Some foundationalists simply assert such warrant as derived from experience but, unlike appeals to certainty by proponents of the given, this assertion seems ad hoc.

A better strategy is to tie an account of self-justification to a broader exposition of epistemic warrant. One such account sees justification as a kind of inference to the best explanation. A belief is shown to be justified if its truth is shown to be part of the best explanation for why it is held. A belief is self-justified if the best explanation for it is its truth alone. The best explanation for the belief that I am appeared to redly may be that I am. Such accounts seek to ground knowledge in perceptual experience without appealing to an infallible given, now universally dismissed.

Nonetheless, it goes without saying that many problems concerning scientific change have been clarified, and many new answers suggested. Concepts central to its analysis, like ‘paradigm’, ‘core’, ‘problem’, ‘constraint’ and ‘verisimilitude’, have been refined, and many devastating criticisms of the doctrines based on them have been answered satisfactorily.

Problems centrally important for the analysis of scientific change have been neglected. There are, for instance, lingering echoes of logical empiricism in claims that the methods and goals of science are unchanging, and thus are independent of scientific change itself, or that if they do change, they do so for reasons independent of those involved in substantive scientific change itself. By their very nature, such approaches fail to address the changes that actually occur in science. For example, even supposing that science ultimately seeks the general and unaltered goal of ‘truth’ or ‘verisimilitude’, that injunction itself gives no guidance as to what scientists should seek or how they should go about seeking it. More specific scientific goals do provide guidance, and, as the transition from mechanistic to gauge-theoretic goals illustrates, those goals are often altered in light of discoveries about what is achievable, or about what kinds of theories are promising. A theory of scientific change should account for these kinds of goal changes, and for how, once accepted, they alter the rest of the patterns of scientific reasoning and change, including ways in which more general goals and methods may be reconceived.

To declare scientific changes to be consequences of ‘observation’ or ‘experimental evidence’ is again to overstress the superficially unchanging aspects of science. We must ask how what counts as observation, experiment, and evidence itself alters in the light of newly accepted scientific beliefs. Likewise, it is now clear that scientific change cannot be understood in terms of dogmatically embraced holistic cores: the factors guiding scientific change are by no means the monolithic structures which they have been portrayed as being. Some writers prefer to speak of ‘background knowledge’ (or ‘information’) as shaping scientific change, the suggestion being that there are a variety of ways in which a variety of prior ideas influence scientific research in a variety of circumstances. But it is essential that any such complexity of influences be fully detailed, not left, as by the philosopher of science Karl Raimund Popper (1902-1994), with cursory treatment of a few functions selected to bolster a prior theory (in this case, falsificationism). Similarly, focus on ‘constraints’ can mislead, suggesting too negative a concept to do justice to the positive roles of the information utilized. Insofar as constraints are scientific and not trans-scientific, they are usually ‘functions’, not ‘types’, of scientific propositions.

Traditionally, philosophy has concerned itself with relations between propositions which are specifically relevant to one another in form or content. So viewed, a philosophical explanation of scientific change should appeal to factors which are clearly more scientifically relevant in their content to the specific directions of new scientific research and conclusions than are social factors whose overt relevance lies elsewhere. Nonetheless, in recent years many writers, especially in the ‘strong programme’ in the sociology of knowledge, have insisted that scientific practices must be assimilated to social influences.

Such claims are excessive. Despite allegations that even what counts as evidence is a matter of mere negotiated agreement, many consider that the last word has not been said on the idea that there is, in some deeply important sense, a ‘given’ to experience in terms of which we can, at least partially, judge theories, together with ‘background information’ which can help guide those and other judgements. Even if we cannot fully account for what science should and can be, and certainly not for what it often is in human practice, neither should we take the criticism of it for granted, accepting that scientific change is explainable only by appeal to external factors.

Equally, we cannot accept too readily the assumption (another logical empiricist legacy) that our task is to explain science and its evolution by appeal to meta-scientific rules or goals, or metaphysical principles, arrived at in the light of purely philosophical analysis, and altered (if at all) by factors independent of substantive science. For such trans-scientific analyses, even while claiming to explain ‘what science is’, do so in terms ‘external’ to the processes by which science actually changes.

Externalist claims are premature, for not enough is yet understood about the roles of indisputably scientific considerations in shaping scientific change, including changes of methods and goals. Even if we ultimately cannot accept the traditional ‘internalist’ approach to philosophy of science, as philosophers concerned with the form and content of reasoning we must determine accurately how far it can be carried. For that task, historical and contemporary case studies are necessary but insufficient: too often the positive implications of such studies are left unclear, and it is too hastily assumed that whatever lessons are generated therefrom apply equally to later science. Larger lessons need to be extracted from concrete studies. Further, such lessons must, where possible, be given a systematic account, integrating the revealed patterns of scientific reasoning and the ways they are altered into a coherent interpretation of the knowledge-seeking enterprise - a theory of scientific change. Only through such efforts, whether they succeed or through understanding our failure in them, will it be possible to assess precisely the extent to which trans-scientific factors (meta-scientific, social, or otherwise) must be included in accounts of scientific change.

Much discussion of scientific change turns upon the distinction between the contexts of discovery and justification. As to discovery, there is usually thought to be no authoritative confirmation theory telling how bodies of evidence support a hypothesis; instead, science proceeds by a ‘hypothetico-deductive method’ or ‘method of conjectures and refutations’. By contrast, early inductivists held that (1) science begins with data collection, (2) rules of inference are applied to the data to obtain a theoretical conclusion, or at least to eliminate alternatives, and (3) that conclusion is established with high confidence or even proved conclusively by the rules. Rules of inductive reasoning were proposed by the English statesman and philosopher Francis Bacon (1561-1626) and by the British mathematician and physicist, and principal source of the classical scientific view of the world, Sir Isaac Newton (1642-1727), in the second edition of the Principia (‘Rules of Reasoning in Philosophy’). Such procedures were allegedly applied in Newton’s ‘Opticks’ and in many eighteenth-century experimental studies of heat, light, electricity, and chemistry.

According to Laudan (1981), two gradual realizations led to rejection of this conception of scientific method. First, inferences from facts to generalizations are not established with certainty; hence scientists became more willing to consider hypotheses with little prior empirical grounding. Secondly, explanatory concepts often go beyond sense experience, and such trans-empirical concepts as ‘atom’ and ‘field’ can be introduced in the formulation of hypotheses. Thus, by the middle of the eighteenth century, the inductive conception began to be replaced by the method of hypothesis, or hypothetico-deductive method. On this view, the order of events in science is, first, the introduction of a hypothesis and, second, the testing of observational predictions of that hypothesis against observational and experimental results.

Twentieth-century relativity and quantum mechanics alerted scientists even more to the potential depth of departures from common sense and earlier scientific ideas. Philosophers’ attention, however, was drawn away from scientific change and directed toward an analysis of timeless, ‘formal’ characteristics of science: the dynamical character of science, emphasized by physics itself, was lost in a quest for unchanging characteristics definitory of science and its major components - the ‘content’ of thought and the ‘meanings’ of fundamental ‘meta-scientific’ concepts. The hypothetico-deductive conception of method, endorsed by the logical empiricists, was likewise construed in these terms: ‘discovery’, the introduction of new ideas, was grist for historians, psychologists, or sociologists, whereas the ‘justification’ of scientific ideas was the application of logic and thus the proper object of philosophy of science.

The fundamental tenet of logical empiricism is that the warrant for all scientific knowledge rests upon empirical evidence in conjunction with logic, where logic is taken to include induction or confirmation, as well as mathematics and formal logic. In the eighteenth century the work of the English empiricist John Locke (1632-1704) had important implications for the social sciences. The rejection of innate ideas in Book I of the Essay encouraged an emphasis on the empirical study of human societies, to discover just what explained their variety, and this led toward the establishment of the science of social anthropology.

Induction (logic), in logic, is the process of drawing a conclusion about an object or event that has yet to be observed or to occur, based on previous observations of similar objects or events. For example, after observing year after year that a certain kind of weed invades our yard in autumn, we may conclude that next autumn our yard will again be invaded by the weed; or having tested a large sample of coffee makers, only to find that each of them has a faulty fuse, we conclude that all the coffee makers in the batch are defective. In these cases we infer - that is, reach a conclusion - based on observations. The observations or assumptions on which we base the inference - the annual appearance of the weed, or the sample of coffee makers with faulty fuses - form the premises of the inference.

In an inductive inference, the premises provide evidence or support for the conclusion; this support can vary in strength. The argument’s strength depends on how likely it is that the conclusion will be true, assuming all of the premises to be true. If assuming the premises to be true makes it highly probable that the conclusion also would be true, the argument is inductively strong. If, however, the supposition that all the premises are true only slightly increases the probability that the conclusion will be true, the argument is inductively weak.

The truth or falsity of the premises or the conclusion is not at issue. Strength instead depends on whether, and how much, the likelihood of the conclusion’s being true would increase if the premises were true. So, in induction, as in deduction, the emphasis is on the form of support that the premises provide to the conclusion. However, induction differs from deduction in a crucial respect. In deduction, for an argument to be correct, if the premises were true, the conclusion would have to be true as well. In induction, however, even when an argument is inductively strong, the possibility remains that the premises are true and the conclusion false. To return to our examples, although it is true that this weed has invaded our yard every year, it remains possible that the weed could die and never reappear. Likewise, it is true that all of the coffee makers tested had faulty fuses, but it is possible that the remainder of the coffee makers in the batch are not defective. Yet it is still correct, from an inductive point of view, to infer that the weed will return, and that the remaining coffee makers have faulty fuses.

Thus, strictly speaking, all inductive inferences are deductively invalid. Yet induction is not worthless; in both everyday reasoning and scientific reasoning regarding matters of fact - for instance in trying to establish general empirical laws - induction plays a central role. In an inductive inference, for example, we draw conclusions about an entire group of things, or a population, based on data about a sample of that group or population; or we predict the occurrence of a future event because of observations of similar past events; or we attribute a property to a non-observed thing because all observed things of the same kind have that property; or we draw conclusions about causes of an illness based on observations of symptoms. Inductive inference is used in most fields, including education, psychology, physics, chemistry, biology, and sociology. Consequently, because the role of induction is so central in our processes of reasoning, the study of inductive inference is a major concern for those who attempt to create computer models of human reasoning in artificial intelligence.
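
Because this pattern of reasoning is so simple, it can even be mechanized. The following Python fragment (a minimal sketch added here for illustration, with hypothetical numbers, not anything drawn from the literature under discussion) projects an observed sample rate onto an unobserved remainder, and shows in passing why the inference, however strong, is not deductively valid:

import random

# A toy batch of 1,000 coffee makers, of which (unknown to the tester)
# a few in fact have sound fuses.
random.seed(1)
batch = [True] * 990 + [False] * 10      # True = faulty fuse
random.shuffle(batch)

sample = batch[:50]                      # we inspect only fifty units
observed_rate = sum(sample) / len(sample)

# The inductive step: project the sample rate onto the whole batch.
print(f"faulty in sample: {observed_rate:.0%}")
print(f"faulty in batch:  {sum(batch) / len(batch):.0%}")
# Even if every sampled unit tests faulty, the conclusion 'all are
# faulty' can be false: the premises do not deductively entail it.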

The development of inductive logic owes a great deal to the nineteenth-century British philosopher John Stuart Mill, who studied different methods of reasoning and experimental inquiry in his ‘A System of Logic’ (1843). Mill was chiefly interested in studying and classifying the different types of reasoning in which we start with observations of events and go on to infer the causes of those events. In ‘A Treatise on Induction and Probability’ (1960), the twentieth-century Finnish philosopher Georg Henrik von Wright expounded the theoretical foundations of Mill’s methods of inquiry.

Philosophers have struggled with the question of what justification we have for taking for granted induction’s common assumptions: that the future will follow the same patterns as the past; that a whole population will behave roughly like a randomly chosen sample; that the laws of nature governing causes and effects are uniform; or that several observed objects give us grounds to attribute something to another object we have not yet observed. In short, what is the justification for induction itself? This question, known as the problem of induction, was first raised by the eighteenth-century Scottish philosopher David Hume in his An Enquiry Concerning Human Understanding (1748). It is tempting to justify induction by pointing out that inductive reasoning is commonly used in both everyday life and science, and that its conclusions have, by and large, been correct; but this justification is itself an induction and therefore raises the same problem: nothing guarantees that simply because induction has worked in the past it will continue to work in the future. The problem of induction raises important questions for the philosopher and logician whose concern it is to provide a basis for assessing the correctness and the value of methods of reasoning.

In the eighteenth century, Locke’s empiricism and the science of Newton were, with reason, combined in people’s eyes to provide a paradigm of rational inquiry that, arguably, has never been entirely displaced. It emphasized the very limited scope of absolute certainties in the natural and social sciences, and more generally underlined the boundaries to certain knowledge that arise from our limited capacities for observation and reasoning. To that extent it provided an important foil to the exaggerated claims sometimes made for the natural sciences in the wake of Newton’s achievements in mathematical physics.

This appears to conflict strongly with Thomas Kuhn’s (1922-96) statement that scientific theory choice depends on considerations that go beyond observation and logic, even when logic is construed to include confirmation.

Nonetheless, it can be said that the state of science at any given time is characterized, in part, by the theories accepted then. Presently accepted theories include quantum theory, the general theory of relativity, and the modern synthesis of Darwin and Mendel, as well as lower-level, but still clearly theoretical, assertions such as that DNA has a double-helical structure, that the hydrogen atom contains a single electron, and so forth. What precisely is involved in accepting a theory, and what factors govern theory choice, are questions that remain to be answered.

Many critics have been scornful of the philosophical preoccupation with under-determination - the thesis that a theory is supported by evidence only through the observational consequences it implies, and that such evidence never suffices to single out one theory. The French physicist Pierre Duhem, remembered philosophically for his La Théorie physique (1906), translated as ‘The Aim and Structure of Physical Theory’, held that a theory is a device for calculation: science provides a deductive system that is systematic, economical, and predictive. Following Duhem, Willard Van Orman Quine (1908-2000) pointed out that observational consequences can seldom if ever be deduced from a single scientific theory taken by itself: rather, the theory must be taken in conjunction with a whole lot of other hypotheses and background knowledge, which are usually not articulated in detail and may sometimes be quite difficult to specify. A theoretical sentence does not, in general, have any empirical content of its own. This doctrine is called ‘holism’, a term that refers to a variety of positions that have in common a resistance to understanding large unities as merely the sum of their parts, and an insistence that we cannot explain or understand the parts without treating them as belonging to such larger wholes. Some of these issues concern explanation. It is argued, for example, that facts about social classes are not reducible to facts about the beliefs and actions of the agents who belong to them, or it is claimed that we only understand the actions of individuals by locating them in social roles or systems of social meanings.

But, whatever may be the case with under-determination, there is a very closely related problem that scientists certainly do face whenever two or more rival theories or encompassing theoretical frameworks are competing for acceptance. This is the problem posed by the fact that one framework, usually the older, longer-established one, can accommodate - that is, produce post hoc explanations of - particular pieces of evidence that seem intuitively to tell strongly in favour of the other (usually the new, ‘revolutionary’) framework.

For example, the Newtonian particulate theory of light is often thought of as having been straightforwardly refuted by the outcome of experiments - like Young’s two-slit experiment - whose results were correctly predicted by the rival wave theory. Duhem’s (1906) analysis of theories and theory testing already shows that this cannot logically have been the case. The bare theory that light consists of some sort of material particle has no empirical consequences in isolation from other assumptions: and it follows that there must always be assumptions that could be added to the bare corpuscular theory such that the combined assumptions entail the correct result of any optical experiment. And indeed, a little historical research soon reveals eighteenth- and early nineteenth-century emissionists who suggested at least outline ways in which interference results could be accommodated within the corpuscular framework. Brewster, for example, suggested that interference might be a physiological phenomenon, while Biot and others worked on the idea that the so-called interference fringes are produced by the peculiarities of the ‘diffracting forces’ that ordinary gross matter exerts on the light corpuscles.

Both suggestions ran into major conceptual problems. The ‘diffracting force’ suggestion, for example, would not even come close to working with forces of any kinds that were taken to operate in other cases. Often the failure was qualitative: given the properties of forces that were already known about, it was expected that the diffracting force would depend in some way on the material properties of the diffracting object; but whatever the material of the double-slit screen in Young’s experiment, and whatever its density, the outcome is the same. It could, of course, simply be assumed that the diffracting forces are of an entirely novel kind, and that their properties just had to be ‘read off’ the phenomena - this is exactly the way the corpuscularists worked. But given that this was simply a matter of attempting to write the phenomena into a favoured conceptual framework, and given that the writing-in produced complexities and incongruities for which there was no independent evidence, the majority view was that the interference results strongly favour the wave theory, of which they are ‘natural’ consequences. (For example, that the material making up the double slit and its density have no effect at all on the phenomenon is a straightforward consequence of the fact that, as the wave theory has it, the screen’s only effect is to absorb those parts of the wave fronts that impinge on it.)

The natural methodological judgement (and the one that seems to have been made by the majority of competent scientists at the time) is that, even given that the interference effects could be accommodated within the corpuscular theory, those effects nonetheless favour the wave account, and favour it in the epistemic sense of showing that theory to be more likely to be true. Of course, the account given by the wave theory of the interference phenomena is also, in certain senses, pragmatically simpler: but this seems generally to have been taken to be, not a virtue in itself, but a reflection of a deeper virtue connected with likely truth.

Consider a second, similar case: that of evolutionary theory and the fossil record. There are well-known disputes about which particular evolutionary account draws most support from the fossils. Nonetheless, the fossil evidence is generally taken to carry great weight for some sort of evolutionary account as against the special-creationist theory - and yet the theory of special creation can accommodate the fossils. A creationist just needs to claim that what the evolutionist thinks of as bones of animals belonging to extinct species are in fact simply items that God chose to include among the universe’s contents at creation, as are what the evolutionist thinks of as imprints in the rocks of the skeletons of other such animals. It nonetheless surely still seems true intuitively that the fossil records give us better reason to believe that species have evolved from earlier, now extinct ones than that God created the universe much as it presently is in 4004 BC. An empiricist-instrumentalist approach seems committed to the view that, on the contrary, any preference that this evidence yields for the evolutionary account is a purely pragmatic matter.

Of course, intuitions, no matter how strong, cannot stand against strong counter arguments. Van Fraassen and other strong empiricists have produced arguments that purport to show that these intuitions are indeed misguided.

What justifies the acceptance of a theory? Although particular versions of empiricism have met many criticisms, it still seems that any answer must be framed in empiricist terms: in terms, that is, of support by the available evidence. How else could the objectivity of science be defended except by showing that its conclusions (and in particular its theoretical conclusions - its theories) are somehow legitimately based on agreed observational and experimental evidence? Yet, as is well known, theories in general pose a problem for empiricism.

Allow the empiricist the assumption that there are observational statements whose truth-values can be intersubjectively agreed. A definitive formulation of the classical view was finally provided by the German logical positivist Rudolf Carnap (1891-1970), who combined a basic empiricism with the logical tools provided by Frege and Russell; and it is in his work that the main achievements (and difficulties) of logical positivism are best exhibited. His first major work was Der logische Aufbau der Welt (1928, translated as ‘The Logical Structure of the World’, 1967). This phenomenalistic work attempts a reduction of all the objects of knowledge by generating equivalence classes of sensations, related by a primitive relation of remembrance of similarity. This is the solipsistic basis of the construction of the external world, although Carnap later resisted the apparent metaphysical priority given to experience. His hostility to metaphysics soon developed into the characteristic positivist view that metaphysical questions are pseudo-problems. Criticism from the Austrian philosopher and social theorist Otto Neurath (1882-1945) shifted Carnap’s interest toward a view of the unity of the sciences, with the concepts and theses of the special sciences translatable into a basic physical vocabulary whose protocol statements describe not experience but the qualities of points in space-time. Carnap pursued the enterprise of clarifying the structures of mathematics and scientific language (the only legitimate task for scientific philosophy) in Logische Syntax der Sprache (1934, translated as ‘The Logical Syntax of Language’, 1937). Refinements to his syntactic and semantic views continued with Meaning and Necessity (1947), while a general loosening of the original ideal of reduction culminated in the great Logical Foundations of Probability, the most important single work of confirmation theory, in 1950. Other works concern the structure of physics and the concept of entropy.

The observational terms, then, were presumed to be given a complete empirical interpretation, which left the theoretical terms with only an ‘indirect’ empirical interpretation, provided by their implicit definition within an axiom system in which some of the terms possessed a complete empirical interpretation.

Among the issues generated by Carnap’s formulation was the viability of ‘the theory-observation distinction’. Of course, one could always arbitrarily designate some subset of non-logical terms as belonging to the observational vocabulary; however, that would compromise the relevance of the philosophical analysis for any understanding of the original scientific theory. But what could be the philosophical basis for drawing the distinction? Take the predicate ‘spherical’, for example. Anyone can observe that a billiard ball is spherical, but what about the moon, or an invisible speck of sand? Is the application of the term ‘spherical’ to these objects ‘observational’?

Another problem was more formal: Craig’s theorem seemed to show that a theory reconstructed in the recommended fashion could be re-axiomatized in such a way as to dispense with all theoretical terms, while retaining all logical consequences involving only observational terms. Craig’s theorem is a theorem in mathematical logic, held to have implications in the philosophy of science. The logician William Craig showed that if we partition the vocabulary of a formal system (say, into the ‘T’ or theoretical terms and the ‘O’ or observational terms), then if there is a fully formalized system ‘T’ with some set ‘S’ of consequences containing only the ‘O’ terms, there is also a system containing only the ‘O’ vocabulary but strong enough to give the same set ‘S’ of consequences. The theorem is a purely formal one, in that ‘T’ and ‘O’ simply separate formulae into the preferred ones, containing non-logical terms of only one kind of vocabulary, and the others. The theorem might encourage the thought that the theoretical terms of a scientific theory are in principle dispensable, since the same consequences can be derived without them.

However, Craig’s actual procedure gives no effective way of dispensing with theoretical terms in advance - that is, in the actual process of thinking about and designing the premises from which the set ‘S’ follows; in this sense the ‘O’-system remains parasitic upon its parent ‘T’.
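
A toy illustration (added here; it is not Craig’s own construction) makes the point vivid. Let the theoretical vocabulary be {T} and the observational vocabulary be {O1, O2}, and consider the theory:

Θ = { ∀x (O1(x) → T(x)), ∀x (T(x) → O2(x)) }

Among Θ’s purely observational consequences is ∀x (O1(x) → O2(x)). Craig’s theorem guarantees a re-axiomatization in the vocabulary {O1, O2} alone having exactly Θ’s observational consequences; but that axiomatization is obtained by enumerating consequences of Θ, whereas the compact theoretical premise involving ‘T’ is what one actually reasons from.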

Thus, as far as the ‘empirical’ content of a theory is concerned, it seems that we can do without the theoretical terms. Carnap’s version of the classical view thereby seemed to imply a form of instrumentalism - a problem which the German philosopher of science Carl Gustav Hempel (1905-97) christened ‘the theoretician’s dilemma’.

The great metaphysical debate over the nature of space and time has its roots in the scientific revolution of the sixteenth and seventeenth centuries. An early contribution to the debate came from the French mathematician and founding father of modern philosophy, René Descartes (1596-1650): his identification of matter with extension, and his concomitant theory of all of space as filled by a plenum of matter. Descartes’s interest in the methodology of a unified science culminated in his first work, the Regulae ad Directionem Ingenii (1628/9), which was never completed. Nonetheless, between 1628 and 1649 Descartes first wrote and then cautiously suppressed Le Monde (1634), and in 1637 produced the Discours de la Méthode as a preface to the treatise on mathematics and physics in which he introduced the notion of Cartesian coordinates.

His best-known philosophical work, the Meditationes de Prima Philosophia (Meditations on First Philosophy), together with objections by distinguished contemporaries and replies by Descartes (the Objections and Replies), appeared in 1641. The authors of the objections were: first set, the Dutch theologian Johan de Kater; second set, Mersenne; third set, Hobbes; fourth set, Arnauld; fifth set, Gassendi; and sixth set, Mersenne. The second edition (1642) of the Meditations included a seventh set by the Jesuit Pierre Bourdin. Descartes’s penultimate work, the Principia Philosophiae (Principles of Philosophy) of 1644, was designed partly for use as a theological textbook. His last work, Les Passions de l’âme (The Passions of the Soul), was published in 1649. In that year Descartes visited the court of Kristina of Sweden, where he contracted pneumonia, allegedly through being required to break his normal habit of late rising in order to give lessons at 5:00 a.m. His last words are supposed to have been ‘Ça, mon âme, il faut partir’ - ‘So, my soul, it is time to part’.


Far more profound was the German philosopher, mathematician, and polymath Gottfried Wilhelm Leibniz (1646-1716), who characterized a full-blooded theory of relationism with regard to space and time. As Leibniz elegantly puts his view: ‘Space is nothing but the order of coexistence . . . time is the order of inconsistent possibilities’. Space was taken to be a set of relations among material objects. Setting the deeper monadological view to one side, material objects were the substantival entities; no room was provided for space itself as a substance over and above the material substance of the world. All motion was then merely relative motion of one material thing in the reference frame fixed by another. The Leibnizian theory was one of great subtlety. In particular, the need for a modalized relationism to allow for ‘empty space’ was clearly recognized: an unoccupied spatial location was taken to be a spatial relation that could be realized but was not realized in actuality. Leibniz also offered trenchant arguments against substantivalism. All of these rested upon some variant of the claim that a substantival picture of space allows for the theoretical toleration of alternative world models that are identical as far as any observable consequences are concerned.

Contending with Leibnizian relationism was the ‘substantivalism’ of Isaac Newton (1642-1727) and his disciple Samuel Clarke, who is mainly remembered for his defence of Newton (a friend from Cambridge days) against Leibniz, both on the question of the existence of absolute space and on the propriety of appealing to a force of gravity. Actually, Newton was cautious about thinking of space as a ‘substance’. Sometimes he suggested that it be thought of, rather, as a property - in particular, as a property of the Deity. What was essential to his doctrine, however, was his denial that a relationist theory, with its idea of motion as the relative change of position of one material object with respect to another, can do justice to the facts about motion made evident by empirical science and by the theory that does justice to those facts.



The Newtonian account of motion, like Aristotle’s, has a concept of natural or unforced motion: motion with uniform speed in a constant direction, so-called inertial motion. There is, then, in this theory an absolute notion of constant-velocity motion. Such constant-velocity motions cannot be characterized as merely relative to some material objects, some of which will themselves be non-inertial. Space itself, according to Newton, must exist as an entity over and above the material objects of the world, in order to provide the standard of rest relative to which uniform motion is genuinely inertial motion.

Such absolute uniform motions can be empirically discriminated from absolutely accelerated motions by the absence of the inertial forces felt when the test object is moving genuinely inertially. Furthermore, the application of force to an object is correlated with the object’s change of absolute motion. Only uniform motions relative to space itself are natural motions, requiring no force and no explanation. Newton also clearly saw that the notion of absolute constant speed requires a notion of absolute time, for, relative to an arbitrary cyclic process as defining the time scale, any motion can be made uniform or not, as we choose. Genuinely uniform motions, however, are of constant speed in the absolute time scale fixed by ‘time itself’; periodic processes can be at best good indicators or measures of this flow of absolute time.

Newton’s refutation of relationism by means of the argument from absolute acceleration is one of the most distinctive examples of the way in which the results of empirical experiment, and of the theoretical efforts to explain those results, impinge upon philosophical doctrine. Whatever the force of the purely philosophical objections to Leibnizian relationism - for example, the claim that one must posit a substantival space to make sense of Leibniz’s modalities of possible position - it is a scientific objection to relationism that causes the greatest problems for that philosophical doctrine.

Then again, a number of scientists and philosophers continued to defend the relationist account of space in the face of Newton’s arguments for substantivalism. Among them were Leibniz, Christiaan Huygens, and George Berkeley, who in 1721 published De Motu (‘On Motion’) attacking Newton’s philosophy of space, a topic to which he returned much later in The Analyst of 1734. The empirical distinction between uniform and accelerated motion, however, continued to frustrate their efforts.

In the nineteenth century, the Austrian physicist and philosopher Ernst Mach (1838-1916), made the audacious proposal that absolute acceleration might be viewed as acceleration relative not to a substantival space, but to the material reference frame of what he called the ‘fixed stars’ - that is, relative to a reference frame fixed by what might now be called the ‘average smeared-out mass of the universe’. As far as observational data went, he argued, the fixed stars could be taken to be the frames relative to which uniform motion was absolutely uniform. Mach’s suggestion continues to play an important role in debates up to the present day.

The nature of geometry as an apparently a priori science also continued to receive attention. Geometry served as the paradigm of knowledge for rationalist philosophers, especially for Descartes and the Dutch Jewish rationalist Benedictus de Spinoza (1632-77). Especially important was the German philosopher Immanuel Kant’s (1724-1804) attempt to account for the ability of geometry to go beyond the analytic truths of logic extended by definition. His explanation of the a priori nature of geometry by its ‘transcendentally psychological’ nature - that is, as descriptive of a portion of the mind’s organizing structure imposed on the world of experience - served as his paradigm for legitimated a priori knowledge in general.

A peculiarity of Newton’s theory, of which Newton was well aware, was that whereas acceleration with respect to space itself had empirical consequences, uniform velocity with respect to space itself had none. The theory of light, particularly in J.C. Maxwell’s theory of electromagnetic waves, suggested, however, that there was only one reference frame in which the velocity of light would be the same in all directions, and that this might be taken to be the frame at rest in ‘space itself’. Experiments designed to find this frame seemed to show, however, that light velocity is isotropic and has its standard value in all frames that are in uniform motion in the Newtonian sense. All these experiments, however, measured only the average velocity of the light relative to the reference frame over a round-trip path.



It was the insight of the German physicist Albert Einstein (1879-1955) to take the apparent equivalence of all inertial frames with respect to the velocity of light to be a genuine equivalence. It was from his employment at the Patent Office in Bern that in 1905 he published the papers that laid the foundation of his reputation, on the photoelectric effect, Brownian motion, and the special theory of relativity; in 1916 he published the general theory, and in 1933 he accepted a position at the Institute for Advanced Study in Princeton, which he occupied for the rest of his life. His deepest insight was to see that the genuine equivalence of inertial frames required us to relativize the simultaneity of spatially separated events to a chosen reference frame; and since simultaneity is relative, the spatial distance between non-simultaneous events is relative as well. This theory of Einstein’s later became known as the special theory of relativity.

Einstein’s proposal accounted for the empirical undetectability of the absolute rest frame by optical experiments, because on his account the velocity of light is isotropic and has its standard value in all inertial frames. The theory had immediate kinematic consequences, among them the fact that spatial separations (lengths) and temporal intervals are relative to the frame of motion. A new dynamics was needed if dynamics was to be, as it was for Newton, equivalent in all inertial frames.

Einstein’s novel understanding of space and time was given an elegant framework by H. Minkowski in the form of Minkowski space-time. The primitive elements of the theory were point-like locations in both space and time of unextended happenings. These were called the ‘event locations’ or the ‘events’ of a four-dimensional manifold. There is a frame-invariant separation between events, called the ‘interval’. But the spatial separation between two noncoincident events, as well as their temporal separation, are well defined only relative to a chosen inertial reference frame. In a sense, then, space and time are integrated into a single absolute structure; space and time by themselves have only a derivative and relativized existence.
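
Although the text does not display it, the ‘interval’ has a standard form; in one common sign convention, for two events separated by Δt in time and by Δx, Δy, Δz in space:

s² = c²Δt² - Δx² - Δy² - Δz²

It is this quantity, and not the temporal or spatial separations taken singly, on which all inertial frames agree.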

Whereas the geometry of this space-time bore some analogies to a Euclidean geometry of a four-dimensional space, the transition from space and time taken separately to an integrated space-time required a subtle rethinking of the very subject matter of geometry. ‘Straight lines’ are the straightest curves of this ‘flat’ space-time, but they include ‘null straight lines’, interpreted as the events in the life history of a light ray in a vacuum, and ‘time-like straight lines’, interpreted as the collections of events in the life histories of free inertial particles. Einstein’s further contribution to the revolution in scientific thinking was to bring gravity into the new relativistic framework. The result of his thinking was the theory known as the general theory of relativity.

The heuristic basis for the theory rested upon an empirical fact known to Galileo and Newton, but whose importance was made clear only by Einstein. Gravity, unlike other forces such as the electromagnetic force, acts on all objects independently of their material constitution or of their size. The path through space-time followed by an object under the influence of gravity is determined only by its initial position and velocity. Reflection upon the fact that in a curved space the path of minimal curvature from a point, the so-called ‘geodesic’, is uniquely determined by the point and a direction from it suggested to Einstein that the path of an object acted upon by gravity can be thought of as a geodesic in a curved space-time. The addition of gravity to the space-time of special relativity can be thought of as changing the ‘flat’ space-time of Minkowski into a new, ‘curved’ space-time.

The kind of curvature implied by the theory is that explored by B. Riemann in his theory of intrinsically curved spaces of arbitrary dimension. No assumption is made that the curved space exists in some higher-dimensional flat embedding space; curvature is an intrinsic feature of the space, one that shows up observationally to those within it in the behaviour of its straightest lines - just as the shortest distances between points on the Earth’s surface cannot be reconciled with locating those points on a flat surface. Einstein (and others) offered further heuristic arguments to suggest that gravity might indeed have an effect on relativistic interval separations as determined by measurements using tapes, to determine spatial separations, and clocks, to determine time intervals.

The special theory gives a unified account of the laws of mechanics and of electromagnetism (including optics). Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had also postulated absolute space. In electromagnetism the ‘ether’ was supposed to provide an absolute basis with respect to which motion could be determined. Einstein rejected this and made two postulates: (1) the laws of nature are the same for all observers in uniform relative motion; (2) the speed of light is the same for all such observers, independently of the relative motion of sources and detectors. He showed that these postulates were equivalent to the requirement that the coordinates of space and time used by different observers be related by the Lorentz transformation equations. The theory has several important consequences.

The Lorentz transformations are a set of equations for transforming the position and motion parameters from an observer at a point O(x, y, z) to an observer at O’(x’, y’, z’) moving relative to the first. The equations replace the Galilean transformation equations of Newtonian mechanics in relativistic problems. If the x-axes are chosen to pass through OO’, and the time of an event is t and t’ in the frames of reference of the observers at O and O’ respectively (where the zeros of their time scales were the instants that O and O’ coincided), the equations are:

x’ = β(x - vt)

y’ = y

z’ = z

t’ = β(t - vx/c²),

where v is the relative velocity of separation of O and O’, c is the speed of light, and β is the factor 1/√(1 - v²/c²).
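
These equations are easy to check numerically. The following Python sketch (added for illustration; the event coordinates are arbitrary) applies the transformation and confirms that the frame-invariant interval c²t² - x² comes out the same in both frames:

import math

C = 299_792_458.0  # speed of light in m/s

def lorentz(x, t, v):
    # Map (x, t) in frame O to (x', t') in frame O' moving at speed v
    # along the shared x-axis; y and z are unchanged.
    beta = 1.0 / math.sqrt(1.0 - (v / C) ** 2)  # the factor called beta above
    return beta * (x - v * t), beta * (t - v * x / C ** 2)

x, t = 1.0e8, 1.0                  # an arbitrary event: 10^8 m, 1 s
xp, tp = lorentz(x, t, 0.6 * C)    # frame O' receding at 0.6c

print(C ** 2 * t ** 2 - x ** 2)    # the interval c^2 t^2 - x^2 in O . . .
print(C ** 2 * tp ** 2 - xp ** 2)  # . . . and the same value in O'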

The transformation of time implies that two events that are simultaneous according to one observer will not necessarily be so according to another in uniform relative motion. This does not in any way violate concepts of causation. It will appear to two observers in uniform relative motion that each other’s clock runs slowly. This is the phenomenon of ‘time dilation’; for example, an observer moving with respect to a radioactive source finds a longer decay time than that found by an observer at rest with respect to it, according to:

Tv = T0/(1 - v²/c²)½,

where Tv is the mean life measured by an observer at relative speed v, T0 is the mean life measured by an observer relatively at rest, and c is the speed of light.
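
As a numerical illustration (the figures are hypothetical, chosen only to show the size of the effect), a mean life of 2.2 microseconds at rest, observed at 0.995c, is dilated roughly tenfold:

import math

def dilated_mean_life(T0, v, c=299_792_458.0):
    # Tv = T0/(1 - v^2/c^2)^(1/2), the formula above
    return T0 / math.sqrt(1.0 - (v / c) ** 2)

print(dilated_mean_life(2.2e-6, 0.995 * 299_792_458.0))  # about 2.2e-5 s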

Among the results of relativistic optics is the deduction of the exact form of the Doppler effect. In relativistic mechanics, mass, momentum, and energy are all conserved. An observer with speed v with respect to a particle determines its mass to be m, while an observer at rest with respect to the particle measures the ‘rest mass’ m0, such that:

m = m0/(1 - v²/c²)½

This formula has been verified in innumerable experiments. One consequence is that no body can be accelerated from a speed below c with respect to any observer to one above c, since this would require infinite energy. Einstein deduced that the transfer of energy δE by any process entailed the transfer of mass δm, where δE = δmc²; hence he concluded that the total energy E of any system of mass m would be given by:

E = mc²

The kinetic energy of a particle as determined by an observer with relative speed v is thus (m - m0)c², which tends to the classical value ½m0v² if v ≪ c.
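
The limiting behaviour is a short expansion which the text compresses; with β = 1/√(1 - v²/c²) as before:

(m - m0)c² = m0c²(β - 1) = m0c²(½·v²/c² + ⅜·v⁴/c⁴ + . . .) ≈ ½m0v², for v ≪ c.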

Attempts to express quantum theory in terms consistent with the requirements of relativity were begun by Sommerfeld (1915). Eventually Dirac (1928) gave a relativistic formulation of the wave mechanics of conserved particles (fermions). This explained the concept of spin and the associated magnetic moment, and accounted for certain details of spectra. The theory led to results in the physics of elementary particles and in the theory of beta decay and quantum statistics; the Klein-Gordon equation is the corresponding relativistic wave equation for ‘bosons’.
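
For reference (the text names the equation but does not state it), the Klein-Gordon equation for a free particle of mass m has the standard form:

(1/c²)∂²φ/∂t² - ∇²φ + (mc/ħ)²φ = 0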

A mathematical formulation of the special theory of relativity was given by Minkowski. It is based on the idea that an event is specified by four coordinates: three spatial coordinates and one of time. These coordinates define a four-dimensional space, and the motion of a particle can be described by a curve in this space, which is called ‘Minkowski space-time’.

The special theory of relativity is concerned with relative motion between non-accelerated frames of reference. The general theory deals with general relative motion between accelerated frames of reference. In accelerated systems of reference, certain fictitious forces are observed, such as the centrifugal and Coriolis forces found in rotating systems. These are known as fictitious forces because they disappear when the observer transforms to a non-accelerated system. For example, to an observer in a car rounding a bend at constant velocity, objects in the car appear to suffer a force acting outwards. To an observer outside the car, this is simply their tendency to continue moving in a straight line. The inertia of the objects is seen to cause a fictitious force and the observer can distinguish between non-inertial (accelerated) and inertial (non-accelerated) frames of reference.
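
The fictitious forces mentioned here have standard quantitative forms (supplied for reference; they are not given in the text): in a frame rotating with angular velocity ω, a particle of mass m moving with velocity v’ experiences the apparent forces

F(Coriolis) = -2m ω × v’   and   F(centrifugal) = -m ω × (ω × r),

both of which vanish for a non-rotating (inertial) observer.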

A further point is that, to the observer in the car, all the objects are given the same acceleration, irrespective of their mass. This implies a connection between the fictitious forces arising from accelerated systems and forces due to gravity, where the acceleration produced is likewise independent of the mass. For example, a person in a sealed container could not easily determine whether he was being driven toward the floor by gravity or whether the container was in space and being accelerated upwards by a rocket. Observations made inside the container cannot decide between these alternatives; the two situations are indistinguishable, from which it follows that the inertial mass is the same as the gravitational mass.

The equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using ‘Riemannian space-time’, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any forces is represented by a straight line in Minkowski space-time. In general relativity, using Riemannian space-time, the motion is represented by a line that is no longer straight (in the Euclidean sense) but is the line giving the shortest distance. Such a line is called a ‘geodesic’. Thus, space-time is said to be curved. The extent of this curvature is given by the ‘metric tensor’ for space-time, the components of which are solutions to Einstein’s ‘field equations’. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.
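
Again for reference, the ‘field equations’ and the ‘geodesic’ just mentioned have the standard forms

Rμν - ½R gμν = (8πG/c⁴)Tμν   and   d²xμ/dτ² + Γμαβ (dxα/dτ)(dxβ/dτ) = 0,

the first relating the curvature of the metric to the distribution of matter and energy, the second picking out the straightest paths of the resulting curved geometry.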

The predictions of general relativity differ from those of Newton’s theory only by small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light in the presence of large bodies, and the Einstein shift. Very close agreement between the predicted and the accurately measured values has now been obtained.
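
The Mercury test is easy to reproduce approximately. The Python sketch below (a rough check added here, using the standard first-order formula Δφ = 6πGM/(c²a(1 - e²)) radians per orbit) recovers the famous figure of roughly 43 seconds of arc per century:

import math

GM_SUN = 1.327e20   # m^3/s^2, gravitational parameter of the Sun
C = 2.998e8         # m/s, speed of light
a = 5.79e10         # m, Mercury's semi-major axis
e = 0.2056          # Mercury's orbital eccentricity
T_ORBIT = 87.97     # days, Mercury's orbital period

dphi = 6 * math.pi * GM_SUN / (C ** 2 * a * (1 - e ** 2))  # radians per orbit
orbits_per_century = 100 * 365.25 / T_ORBIT
print(dphi * orbits_per_century * (180 / math.pi) * 3600)  # roughly 43 arc-seconds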

So, then, using the new space-time notions, a ‘curved space-time’ theory of Newtonian gravitation can be constructed. In this theory time is absolute, as in Newton, and space remains flat Euclidean space. This is unlike the general theory of relativity, where the space-time curvature can induce spatial curvature as well. But the space-time curvature of this curved neo-Newtonian space-time shows up in the fact that particles under the influence of gravity do not follow straight-line paths. Their paths become, as in general relativity, the curved time-like geodesics of the space-time. In this curved space-time account of Newtonian gravity, as in the general theory of relativity, the indistinguishable alternative worlds of theories that take gravity to be a force superimposed on a flat space-time collapse to a single world model.

The strongest impetus to rethink epistemological issues in the theory of space and time came from the introduction of curvature and of non-Euclidean geometries in the general theory of relativity. The claim that a unique geometry could be known to hold true of the world a priori seemed unviable, at least in its naive form, in a situation where our best available physical theory allowed for a wide diversity of possible geometries for the world, and in which the geometry of space-time was one more dynamical element joining the other ‘variable’ features of the world. Of course, skepticism toward an a priori account of geometry could already have been induced by the change from space and time to space-time in the special theory, even though the space of that world remained Euclidean.

The natural response to these changes in physics was to suggest that geometry was, like all other physical theories, believable only on the basis of some kind of generalizing inference from the law-like regularities among the observational data - that is, to become an empiricist with regard to geometry.

But a defence of a kind of a priori account had already been suggested by the French mathematician and philosopher Henri Jules Poincaré (1854-1912), even before the invention of the relativistic theories. He suggested that, since observational data are limited to the domain of what is both material and local, deriving the geometry of the world from them requires, over and above the data, a convention or decision on the part of the scientific community. If any geometric posit could be made compatible with any set of observational data, Euclidean geometry could remain a priori in the sense that we could, conventionally, decide to hold to it as the geometry of the world in the face of any data that apparently refuted it.

The central epistemological issue in the philosophy of space and time remains that of theoretical under-determination, stemming from the Poincaré argument. In the case of the special theory of relativity the question is the rational basis for choosing Einstein’s theory over, for example, one of the ‘aether reference frame plus modification of rods and clocks when they are in motion with respect to the aether’ theories that it displaced. Among the claims alleged to be true merely by convention in the theory are those asserting the simultaneity of distant events and those asserting the ‘flatness’ of the chosen space-time. Crucial here is the fact that Einstein’s arguments themselves presuppose a strictly delimited local observation basis for the theories, and that in fixing upon the special theory of relativity one must make posits about the space-time structure that outrun the facts given strictly by observation. In the case of the general theory of relativity, the issue becomes one of justifying the choice of general relativity over, for example, a flat space-time theory that treats gravity, as it was treated by Newton, as a ‘field of force’ over and above the space-time structure.

In both the case of special and that of general relativity, important structural features pick out the standard Einstein theories as superior to their alternatives. In particular, the standard relativistic models eliminate some of the problems of observationally equivalent but theoretically distinct worlds countenanced by the alternative theories. However, the epistemologist must still be concerned with the question of why these features constitute grounds for accepting the theories as the ‘true’ alternatives.

Other deep epistemological issues remain, having to do with the relationship between the structures of space and time posited in our theories of relativity and the spatiotemporal structures we use to characterize our ‘direct perceptual experience’. These issues continue, in the contemporary scientific context, the old philosophical debates on the relationship between the realm of the directly perceived and the realm of posited physical nature.

A first reaction on the part of some philosophers was to take it that the special theory of relativity provided a replacement for the Newtonian theory of absolute space that would be compatible with a relationist account of the nature of space and time. This was soon seen to be false. The absolute distinction between uniformly moving frames and frames not in uniform motion, invoked by Newton in his crucial argument against relationism, remains in the special theory of relativity. In fact, it becomes an even deeper distinction than it was in the Newtonian account, since the absolutely uniformly moving frames, the inertial frames, now become not only the frames of natural unforced motion, but also the only frames in which the velocity of light is isotropic.

At least part of the motivation behind Einstein’s development of the general theory of relativity was the hope that in this new theory all reference frames, uniformly moving or accelerated, would be ‘equivalent’ to one another physically. It was also his hope that the theory would conform to the Machian idea of absolute acceleration as merely acceleration relative to the smoothed-out matter of the universe.

Further exploration of the theory, however, showed that it had many features uncongenial to Machianism. Some of these are connected with the necessity of imposing boundary conditions for the equations connecting the matter distribution with the space-time structure. General relativity certainly allows as solutions model universes of a non-Machian sort - for example, those which are aptly described as having the smoothed-out matter of the universe itself in ‘absolute rotation’. There are strong arguments to suggest that general relativity, like Newton’s theory and like special relativity, requires the positing of a structure of ‘space-time itself’, and of motion relative to that structure, in order to account for the needed distinctions among kinds of motion in dynamics. Whereas in Newtonian theory it was ‘space itself’ that provided the absolute reference frames, in general relativity it is the structure of the null and time-like geodesics that performs this task. The compatibility of general relativity with Machian ideas is, however, a subtle matter and one still open to debate.

Other aspects of the world described by the general theory of relativity argue for a substantivalist reading of the theory as well. Space-time has become a dynamic element of the world, one that might be thought of as ‘causally interacting’ with the ordinary matter of the world. In some sense one can even attribute energy (and hence mass) to the space-time itself (although this is a subtle matter in the theory), making the very distinction between ‘matter’ and ‘space-time itself’ much more dubious than such a distinction would have been in the early days of the debate between substantivalists and relationists.

Nonetheless, a naive reading of general relativity as a substantivalist theory has its problems as well. One problem was noted by Einstein himself in the early days of the theory. If a region of space-time is devoid of non-gravitational mass-energy, alternative solutions to the equations of the theory connecting mass-energy with the space-time structure will agree in all regions outside the matterless ‘hole’, but will offer distinct space-time structures within it. This suggests a local version of the old Leibniz arguments against substantivalism. The argument now takes the form of a claim that a substantival reading of the theory forces it into a strong version of indeterminism, since the space-time structure outside the hole fails to fix the structure of space-time within the hole. Einstein’s own response to this problem has a very relationistic cast, taking the ‘real facts’ of the world to be intersections of paths of particles and light rays with one another, and not the structure of ‘space-time itself’. Needless to say, there are substantivalist attempts to deal with the ‘hole’ argument as well, which try to reconcile a substantival reading of the theory with determinism.

There are arguments on the part of the relationist to the effect that any substantivalist theory, even one with a distinction between absolute acceleration and mere relative acceleration, can be given a relationistic formulation. These relationistic reformulations of the standard theories lack the standard theories’ ability to explain why non-inertial motion has the features that it does. But the relationist counters by arguing that the explanation forthcoming from the substantivalist account is too ‘thin’ to have genuine explanatory value anyway.

Relationist theories are founded, as are conventionalist theses in the epistemology of space-time, on the desire to restrict ontology to that which is present in experience, this being taken to be coincidences of material events at a point. Such relationist-conventionalist accounts suffer, however, from a strong pressure to slide into full-fledged phenomenalism.

As science progresses, our posited physical space-times become more and more remote from the space-time we think of as characterizing immediate experience. This will become even more true as we move from the classical space-times of the relativity theories into fully quantized physical accounts of space-time. There is strong pressure, from the growing divergence of the space-time of physics from the space-time of our ‘immediate experience’, to dissociate the two completely and, perhaps, to stop thinking of the space-time of physics as being anything like our ordinary notions of space and time. Whether such a radical dissociation of posited nature from phenomenological experience can be sustained, however, without giving up our grasp entirely on what it is to think of a physical theory ‘realistically’, is an open question.

Science aims to represent accurately the actual ontological unity and diversity of the world. The wholeness of the spatiotemporal framework and the existence of physics - that is, of laws invariant across all the states of matter - do represent ontological unities which must be reflected in some unification of content. However, there is no simple relation between ontological and descriptive unity or diversity. A variety of approaches to representing unity are available (the formal-substantive spectrum, and the corresponding range of naturalisms). Anything complex will support many different partial descriptions; and, conversely, different kinds of things may all obey the laws of a unified theory, e.g., the quantum field theory of fundamental particles, or may collectively be ascribed dynamical unity, e.g., self-organizing systems.

It is reasonable to eliminate gratuitous duplication from description - that is, to apply some principle of simplicity. However, this is not necessarily the same as demanding that content satisfy some further methodological requirement of formal unification. The same holds for elucidating explanations: there is again no reason to limit the account to simple logical systematization. The unity of science might instead be complex, reflecting our multiple epistemic access to a complex reality.

Biology provides a useful analogy. The many diverse species in an ecology nonetheless each map, genetically and cognitively, interrelatable aspects of a single environment, and share in exploiting the properties of gravity, light, and so forth. Though the somatic expression is somewhat idiosyncratic to each species, and each representation incomplete, together they form an interrelatable unity, a multidimensional functional representation of their collective world. Similarly, there are many scientific disciplines, each with its distinctive domains, theories, and methods specialized to the conditions under which it accesses our world. Each discipline may exhibit growing internal metaphysical and nomological unities. On occasion, disciplines, or components thereof, may also formally unite under logical reduction. But a more substantive unity may also be manifested: though content may be somewhat idiosyncratic to each discipline, and each representation incomplete, together the disciplinary contents form an interrelatable unity, a multidimensional functional representation of their collective world. Correlatively, a key strength of scientific activity lies not in formal monolithicity, but in its forming a complex unity of diverse, interacting processes of experimentation, theorizing, instrumentation, and the like.

While this complex unity may be all that finite cognizers in a complex world can achieve, the accurate representation of a single world is still a central aim. Throughout the history of physics, significant advances are marked by the introduction of new representation (state) spaces in which different descriptions (reference frames) are embedded as interrelatable perspectives among many - thus the passage from Newtonian to relativistic space-time perspectives. Analogously, young children learn to embed two-dimensional visual perspectives in a three-dimensional space in which object constancy is achieved and their own bodies are but some objects among many. In both cases, the process creates constant methodological pressure for greater formal unity within complex unity.

The role of unity in the intimate relation between metaphysics and method in the investigation of nature is well illustrated by the prelude to Newtonian science. In the millennia of Greco-Christian religion preceding the founder of modern astronomy, Johannes Kepler (1571-1630), nature was conceived as an essentially unified mystical order, because suffused with divine reason and intelligence. The pattern of nature was not obvious; it was, rather, a hidden ordered unity which revealed itself to diligent search as a luminous necessity. In his Mysterium Cosmographicum, Kepler tried to construct a model of planetary motion based on the five Pythagorean regular or perfect solids. These were to be inscribed within the Aristotelian perfect spherical planetary orbits in order, and so determine them. Even the fact that space is a three-dimensional unity was a reflection of the one triune God. And when the observational facts proved too awkward for this scheme, Kepler tried instead, in his Harmonice Mundi, to build his unified model on the harmonies of the Pythagorean musical scale.

Subsequently, Kepler trod a difficult and reluctant path to the extraction of his famous three empirical laws of planetary motion: laws that made the Newtonian revolution possible, but that had none of the elegantly simple symmetries that mathematical mysticism required. Thus we find in Kepler both the medieval methods and theories of a metaphysically unified religio-mathematical mysticism and those of modern empirical observation and model fitting - a transitional figure in the passage to modern science.

To appreciate both the historical tradition and the role of unity in modern scientific method, consider Newton’s methodology, focussing just on Newton’s derivation of the law of universal gravitation in Principia Mathematica, book iii. The essential steps are these: (1) The experimental work of Kepler and Galileo (1564-1642) is appealed to, so as to establish certain phenomena, principally Kepler’s laws of celestial planetary motion and Galileo’s terrestrial law of free fall. (2) Newton’s basic laws of motion are applied to the idealized system of an object small in size and mass moving with respect to a much larger mass under the action of a force whose features are purely geometrically determined. The assumed linear vector nature of the force allows construction of the centre-of-mass frame, which separates out relative from common motions: it is an inertial frame (one for which Newton’s first law of motion holds), and the construction can be extended to encompass all solar system objects.

(3) An equivalence is obtained between Kepler’s laws and the geometrical properties of the force: namely, that it is directed always along the line of centres between the masses, and that it varies inversely as the square of the distance between them. (4) Various instances of this force law are obtained for various bodies in the heavens - for example, the individual planets and the moons of Jupiter. From this one can obtain several interconnected mass ratios - in particular, several mass estimates for the Sun, which can be shown to cohere mutually. (5) The value of this force for the Moon is shown to be identical to the force required by Galileo’s law of free fall at the Earth’s surface. (6) Appeal is made again to the laws of motion (especially the third law) to argue that all satellites and falling bodies are equally themselves sources of gravitational force. (7) The force is then generalized to a universal gravitation and is shown to explain various other phenomena - for example, Galileo’s law for pendulum action - while the mutual gravitational perturbations among the planets are shown to be suitably small, thus leaving the original conclusions drawn from Kepler’s laws intact while providing explanations for the deviations.
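
The equivalence claimed in step (3) can be illustrated with a minimal worked sketch (an illustration added here, assuming circular rather than elliptical orbits for simplicity). Equating the inverse-square attraction with the centripetal force required for uniform circular motion, GMm/r² = mv²/r, and substituting the orbital speed v = 2πr/T, yields T² = (4π²/GM)r³: the square of the period is proportional to the cube of the orbital radius, with the same constant of proportionality for every body orbiting the same central mass M. This is Kepler’s third law recovered from the geometrical form of the force.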

Newton’s constructions represent a great methodological, as well as theoretical, achievement. Many other methodological components besides unity deserve study in their own right. The sense of unification here is that of a deep systematization: given the laws of motion, the geometrical form of the gravitational force and all the significant parameters needed for a complete dynamical description - that is, the constant G and the exponent n in the geometrical form of gravity Gm1m2/rⁿ - are uniquely determined from phenomena; and, after the law of universal gravitation has been derived, it plus the laws of motion determine the space and time frames and a set of self-consistent attributions of mass. For example, the coherent mass attributions ground the construction of the locally inertial centre-of-mass frame, and Newton’s first law then enables us to consider time as a magnitude: equal times are those during which a freely moving body traverses equal distances. The space and time frames in turn ground use of the laws of motion, completing the constructive circle. This construction has a profound unity to it, expressed by the multiple interdependency of its components, the convergence of its approximations, and the coherence of its multiply determined quantities. Newton’s Rule IV says (loosely): do not introduce a rival theory unless it provides an equal or superior unified construction - in particular, unless it is able to measure its parameters in terms of empirical phenomena at least as thoroughly and cross-situationally invariantly (Rule III) as does the current theory. This gives unity a central place in scientific method.

Kant and Whewell seized on this feature as a key reason for believing that the Newtonian account had a privileged intelligibility and necessity. Significantly, the requirement to explain deviations from Kepler’s laws through gravitational perturbations has its limits, especially in the cases of the Moon and Mercury: the former needs explanation through the complexities of n-body dynamics (which may even show chaos), the latter through relativistic theory. Today we no longer accept the truth, let alone the necessity, of Newton’s theory. Nonetheless, it remains a standard of intelligibility. It is in this role that it functioned, not just for Kant, but also for Reichenbach, and later Einstein and even Bohr: their sense of crisis with regard to modern physics, and their efforts to reconstruct it, are best seen as stemming from their acceptance of essentially this ideal and their recognition of its falsification by quantum theory. Nonetheless, quantum theory represents a highly unified, because symmetry-preserving, dynamics; it reveals universal constants, and it satisfies the requirement of coherent and invariant parameter determinations.

Newtonian method provides a central, simple example of the claim that increased unification brings increased explanatory power. A good explanation increases our understanding of the world, and clearly a convincing story can do this. Nonetheless, we have also achieved great increases in our understanding of the world through unification. Newton was able to unify a wide range of phenomena by using his three laws of motion together with his universal law of gravitation. Among other things he was able to account for Johannes Kepler’s three laws of planetary motion, the tides, the motion of the comets, projectile motion and pendulums. Kepler’s laws of planetary motion are, moreover, the first mathematical, scientific laws of astronomy of the modern era. They state (1) that the planets travel in elliptical orbits, with one focus of the ellipse being the sun, (2) that the radius vector between sun and planet sweeps out equal areas in equal times, and (3) that the squares of the periods of revolution of any two planets are in the same ratio as the cubes of their mean distances from the sun.
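
In compact modern notation (a standard formulation, not Kepler’s own), the three laws read: (1) each planet moves on an ellipse with the sun at one focus; (2) the radius vector sweeps out area at a constant rate, dA/dt = constant; (3) T₁²/T₂² = a₁³/a₂³, where T is the period of revolution and a the mean distance from the sun.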

We have explanations by reference to causation, to identities, to analogies, to unification, and possibly to other factors, yet philosophically we would like to find some deeper theory that explains what it is about each of these apparently diverse forms of explanation that makes them explanatory. This we lack at the moment. Dictionary definitions typically explicate the notion of explanation in terms of understanding: an explanation is something that gives understanding or renders something intelligible. Perhaps this is the unifying notion: the different types of explanation are all types of explanation in virtue of their power to give understanding. While certainly an explanation must be capable of giving an appropriately tutored person a psychological sense of understanding, this is not likely to be a fruitful way forward, for there is virtually no limit to what has been taken to give understanding. Once upon a time, many thought that the facts that there were seven virtues and seven orifices of the human head gave them an understanding of why there were (allegedly) only seven planets. We need to distinguish between real and spurious understanding, and for that we need a philosophical theory of explanation that will give us the hallmark of a good explanation.

In recent years, there has been a growing awareness of the pragmatic aspect of explanation. What counts as a satisfactory explanation depends on features of the context in which the explanation is sought. Willy Sutton, the notorious bank robber, is alleged to have answered a priest’s question, ‘Why do you rob banks?’, by saying ‘That is where the money is’. We need to look at the context to be clear about exactly what explanation is being sought. Typically, we are seeking to explain why something is the case rather than something else. The question which Willy’s priest probably had in mind was ‘Why do you rob banks rather than have a socially worthwhile job?’, and not the question ‘Why do you rob banks rather than churches?’ We also need to attend to the background information possessed by the questioner. If we are asked why a certain bird has a long beak, it is no use answering (as the D-N approach might seem to license) that the bird is an Aleutian tern and all Aleutian terns have long beaks, if the questioner already knows that it is an Aleutian tern. A satisfactory answer typically provides new information; in this case, the questioner may be looking for some evolutionary account of why that species has evolved long beaks. Similarly, we need to attend to the level of sophistication appropriate to the answer: we do not provide the same explanation of some chemical phenomenon to a school child as to a student of quantum chemistry.

Van Fraassen, whose work has been crucially important in drawing attention to the pragmatic aspects of explanation, has gone further in advocating a purely pragmatic theory of explanation. A crucial feature of his approach is a notion of relevance: explanatory answers to ‘why’ questions must be relevant, but relevance itself is, for van Fraassen, a function of the context. For that reason he has denied that it even makes sense to talk of the explanatory power of a theory. However, his critics (Kitcher and Salmon) point out that his notion of relevance is unconstrained, with the consequence that anything can explain anything. This reductio can be avoided only by developing constraints on the relation of relevance, constraints that will not be a function of the context, and which hence take us away from a purely pragmatic approach to explanation.

The result is increased explanatory power for Newton’s theory, because of the increased scope and robustness of its laws: the data pool which now supports them is the largest and most widely accessible, and it brings its support to bear on a single force law with only two adjustable, multiply determined parameters (the masses). Call this kind of unification (simpler than full constructive unification) ‘coherent unification’. Much has been made of these ideas in recent philosophy of method, representing something of a resurgence of the Kant-Whewell tradition.

Unification of theories is achieved when several theories T1, T2, . . . Tn previously regarded as distinct are subsumed into a theory of broader scope T*. Classical examples are the unification of theories of electricity, magnetism, and light into Maxwell’s theory of electrodynamics, and the unification of evolutionary and genetic theory in the modern synthesis.

In some instances of unification, T* logically entails T1, T2, . . . Tn under particular assumptions. This is the sense in which the equation of state for ideal gases, pV = nRT, is a unification of Boyle’s law, pV = constant for constant temperature, and Charles’s law, V/T = constant for constant pressure. Frequently, however, the logical relations between theories involved in unification are less straightforward. In some cases, the claims of T* strictly contradict the claims of T1, T2, . . . Tn. For instance, Newton’s inverse-square law of gravitation is inconsistent with Kepler’s laws of planetary motion and Galileo’s law of free fall, which it is often said to have unified. Calling such an achievement ‘unification’ may be justified by saying that T* accounts on its own for the domains of phenomena that had previously been treated by T1, T2, . . . Tn. In other cases described as unification, T* uses fundamental concepts different from those of T1, T2, . . . Tn, so the logical relations among them are unclear. For instance, the wave and corpuscular theories of light are said to have been unified in quantum theory, but the concept of the quantum particle is alien to the classical theories. Some authors view such cases not as a unification of the original T1, T2, . . . Tn, but as their abandonment and replacement by a wholly new theory T* that is incommensurable with them.
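
The ideal-gas case makes the entailment explicit in two lines (spelling out what the text asserts, with the amount of gas n and the gas constant R held fixed): at constant temperature, pV = nRT = constant, which is Boyle’s law; at constant pressure, V/T = nR/p = constant, which is Charles’s law.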

Standard techniques for the unification of theories involve isomorphism and reduction. The realization that particular theories attribute isomorphic structures to a number of different physical systems may point the way to a unified theory that attributes the same structure to all such systems. For example, all instances of wave propagation are described by the wave equation:

∂²y/∂x² = (1/v²) ∂²y/∂t²

where the displacement y is given different physical interpretations in different instances. The reduction of some theories to a lower-level theory, perhaps through uncovering the micro-structure of phenomena, may enable the former to be unified into the latter. For instance, Newtonian mechanics represents a unification of many classical physical theories, extending from statistical thermodynamics to celestial mechanics, which portray physical phenomena as systems of classical particles in motion.

Alternative forms of theory unification may be achieved on alternative principles. A good example is provided by the Newtonian and Leibnizian programs for theory unification. The Newtonian program involves analysing all physical phenomena as the effects of forces between particles. Each force is described by a causal law, modelled on the law of gravitation. The repeated application of these laws is expected to solve all physical problems, unifying celestial mechanics with terrestrial dynamics and the sciences of solids and of fluids. By contrast, the Leibnizian program proposes to unify physical science on the basis of abstract and fundamental principles governing all phenomena, such as principles of continuity, conservation, and relativity. In the Newtonian program, unification derives from the fact that causal laws of the same form apply to every event in the universe; in the Leibnizian program, it derives from the fact that a few universal principles apply to the universe as a whole. The Newtonian approach was dominant in the eighteenth and nineteenth centuries, but more recent strategies to unify the physical sciences have hinged on the formulation of universal conservation and symmetry principles reminiscent of the Leibnizian program.

There are several accounts of why theory unification is a desirable aim. Many hinge on simplicity considerations: a theory of greater generality is more informative than a set of restricted theories, since we need to gather less information about a state of affairs in order to apply the theory to it. Theories of broader scope are also preferable to theories of narrower scope in virtue of being more vulnerable to refutation. And Bayesian principles suggest that simpler theories yielding the same predictions as more complex ones derive stronger support from common favourable evidence: on this view, a single general theory may be better confirmed than several theories of narrower scope that are equally consistent with the available data.
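
The Bayesian point can be made precise in a standard way (an illustration added here, not part of the original argument). By Bayes’s theorem, P(T|E) = P(E|T)P(T)/P(E). If two theories entail the same evidence E, so that P(E|T₁) = P(E|T₂), their posterior probabilities are ordered exactly as their priors; and since a conjunction of several narrower theories can be no more probable a priori than any one of its conjuncts, a single general theory that does the same predictive work may end up better confirmed.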

Theory unification has provided the basis for influential accounts of explanation. According to many authors, explanation is largely a matter of unifying seemingly independent instances under a generalization. As the explanation of individual physical occurrences is achieved by bringing them within the scope of a scientific theory, so the explanation of individual theories is achieved by deriving them from a theory of a wider domain. On this view, T1, T2, . . . Tn are explained by being unified into T*.

The question of what theory unification reveals about the world arises in the debate between scientific realism and instrumentalism. According to scientific realists, the unification of theories reveals common causes or mechanisms underlying apparently unconnected phenomena. The comparative ease with which scientists achieve such unifications would be inexplicable on any other interpretation, realists maintain, but can be explained if there exists a substrate underlying all phenomena composed of real observable and unobservable entities. Instrumentalists provide a methodological account of theory unification which rejects these ontological claims of realism.

Arguments, in a like manner, are sets of statements some of which purportedly provide support for another. The statements which purportedly provide the support are the premises, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions; inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions: successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case, if all its premises are true, then its conclusion must be true. An argument is strong just in case, if all its premises are true, its conclusion is thereby probable. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premises of an argument confer on its conclusion.
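
A standard illustration of the contrast (added here for clarity): the deductive form modus ponens - from ‘P ➞ Q’ and ‘P’, infer ‘Q’ - is valid, since it is impossible for both premises to be true and the conclusion false; whereas the inductive schema ‘all observed Fs have been Gs, therefore all Fs are Gs’ is at best strong, since true premises render the conclusion probable but never guarantee it.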

The argument from analogy is intended to establish our right to believe in the existence and nature of ‘other minds’. It admits that it is possible that the objects we call persons are, other than ourselves, mindless automata, but claims that we nonetheless have sufficient reason for supposing this not to be the case: there is more evidence that they are not mindless automata than that they are.

The classic statement of the argument comes from J.S. Mill. He wrote:

I am conscious in myself of a series of facts connected by an uniform sequence, of which the beginning is modifications of my body, the middle is feelings, the end is outward demeanour. In the case of other human beings I have the evidence of my senses for the first and last links of the series, but not for the intermediate link. I find, however, that the sequence between the first and last is as regular and constant in those other cases as it is in mine. In my own case I know that the first link produces the last through the intermediate link, and could not produce it without. Experience, therefore, obliges me to conclude that there must be an intermediate link, which must either be the same in others as in myself, or a different one . . . by supposing the link to be of the same nature . . . I conform to the legitimate rules of experimental enquiry.

As an inductive argument this is very weak, because it is condemned to arguing from a single case. But to this we might reply that, nonetheless, we have more evidence that there are other minds than that there are not.

The real criticism of the argument is due to the Austrian philosopher Ludwig Wittgenstein (1889-1951). It is that the argument assumes that we at least understand the claim that there are subjects of experience other than ourselves, who enjoy experiences which are like ours but not ours, and asks only what reason we have to suppose that claim true. But if the argument did indeed express the ground of our right to believe in the existence of others, it would be impossible to explain how we are able to achieve that understanding. So if there is a place for the argument from analogy, the problem of other minds - the real, hard problem, which is how we acquire a conception of another mind - is insoluble. The argument is either redundant or worse.

Even so, the expression ‘the private language argument’ is sometimes used broadly to refer to a battery of arguments in Wittgenstein’s ‘Philosophical Investigations’ which are concerned with the concepts of, and relations between, the mental and its behavioural manifestations (the inner and the outer), self-knowledge and knowledge of others’ mental states, and avowals of experience and descriptions of experiences. It is sometimes used narrowly to refer to a single chain of argument in which Wittgenstein demonstrates the incoherence of the idea that sensation names and names of experiences are given meaning by association with a mental ‘object’ - e.g., the word ‘pain’ by association with the sensation of pain - or by mental (private) ‘ostensive definition’, in which a mental ‘entity’ supposedly functions as a sample - e.g., a mental image, stored in memory, is conceived as providing a paradigm for the application of the name.

A ‘private language’ is not a private code, which could be cracked by another person, nor a language spoken by only one person, which could be taught to others, but a putative language the individual words of which refer to what (apparently) can be known only by the speaker, i.e., to his immediate private sensations or, to use empiricist jargon, to the ‘ideas’ in his mind. It has been a presupposition of the mainstream of modern philosophy - empiricist, rationalist and Kantian representationalism alike - that the languages we speak are such private languages, that the foundations of language no less than the foundations of knowledge lie in private experience. To undermine this picture, with all its complex ramifications, is the purpose of Wittgenstein’s private language arguments.

There are various ways of distinguishing types of foundationalist epistemology. Plantinga (1983) has put forward an influential conception of ‘classical foundationalism’, specified in terms of limitations on the foundations. He construes this as a disjunction of ‘ancient and medieval foundationalism’, which takes foundations to comprise what is self-evident and ‘evident to the senses’, and ‘modern foundationalism’, which replaces ‘evident to the senses’ with ‘incorrigible’, a condition which in practice was taken to apply to beliefs about one’s present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what is variously called ‘strong’ or ‘extreme’ foundationalism and ‘moderate’ or ‘minimal’ foundationalism, with the distinction depending on whether various epistemic immunities are required of foundations. Finally, ‘simple’ and ‘iterative’ foundationalism differ on whether it is required of a foundation only that it be immediately justified, or whether it is also required that the higher-level belief that the former belief is immediately justified be itself immediately justified.

However, the classic opposition is between foundationalism and coherentism. Coherentism denies any immediate justification. It deals with the regress argument by rejecting ‘linear’ chains of justification and, in effect, taking the total system of belief to be epistemically primary. A particular belief is justified to the extent that it is integrated into a coherent system of belief. More recently, ‘pragmatists’ like the American educator, social reformer and philosopher of pragmatism John Dewey (1859-1952) have developed a position known as contextualism, which avoids ascribing any overall structure to knowledge. Questions concerning justification can only arise in particular contexts, defined in terms of assumptions that are simply taken for granted, though they can be questioned in other contexts, where other assumptions will be privileged.

Meanwhile, it is the idea that the language each of us speaks is essentially private, that learning a language is a matter of associating words with, or ostensively defining words by reference to, subjective experience (the ‘given’), and that communication is a matter of stimulating in the mind of the hearer a pattern of associations qualitatively identical with that in the mind of the speaker, which is linked with multiple mutually supporting misconceptions about language, experience and its identity, the mental and its relation to behaviour, self-knowledge, and knowledge of the states of mind of others.

1. The idea that there can be such a thing as a private language is one manifestation of a tacit commitment to what Wittgenstein called ‘Augustine’s picture of language’ - a pre-theoretical picture according to which the essential function of words is to name items in reality, the link between word and world is effected by ‘ostensive definition’, and the essential function of sentences is to describe states of affairs. Applied to the mental, this implies that one knows what a psychological predicate such as ‘pain’ means if one knows - is acquainted with - what it stands for: a sensation one has. The word ‘pain’ is linked to the sensation it names by way of a private ostensive definition, which is effected by concentrating (the subjective analogue of pointing) on the sensation and undertaking to use the word for that sensation. First-person present-tense psychological utterances, such as ‘I have a pain’, are conceived to be descriptions which the speaker, as it were, reads off the facts which are privately accessible to him.

2. Experiences are conceived to be privately owned and inalienable - no one else can have my pain, but at most a pain qualitatively, not numerically, identical with mine. They are also thought to be epistemically private - only I really know that what I have is a pain; others can at best only believe or surmise that I am in pain.

3. Avowals of experience are expressions of self-knowledge. When I have an experience, e.g., a pain, I am conscious or aware that I have it by introspection (conceived as a faculty of inner sense). Consequently, I have direct or immediate knowledge of my subjective experience. Since no one else can have what I have, or peer into my mind, my access is privileged. I know, and am certain, that I have a certain experience whenever I have it, for I cannot doubt that this, which I now have, is a pain.

4. One cannot gain introspective access to the experiences of others, so one can obtain only indirect knowledge or belief about them. They are hidden behind observable behaviour, inaccessible to direct observation, and inferred either analogically or as an inference to the best explanation. The argument from analogy is intended to establish our right to believe in the existence and nature of other minds. It admits that it is possible that the objects we call persons are, other than ourselves, mindless automata, but claims that we nonetheless have sufficient reason for supposing this not to be the case: there is more evidence that they are not mindless automata than that they are.

The real criticism of the argument is due to Wittgenstein (1953). It is that the argument assumes that we at least understand the claim that there are subjects of experience other than ourselves, who enjoy experiences which are like ours but not ours, and asks only what reason we have to suppose that claim true. But if the argument did indeed express the ground of our right to believe in the existence of others, it would be impossible to explain how we are able to achieve that understanding. So if there is a place for the argument from analogy, the problem of other minds - the real, hard problem, which is how we acquire a conception of another mind - is insoluble. The argument is either redundant or worse.

Even so, inference to the best explanation is claimed by many to be a legitimate form of non-deductive reasoning, which provides an important alternative to both deduction and enumerative induction. Indeed, some would claim that it is only through reasoning to the best explanation that one can justify beliefs about the external world, the past, theoretical entities in science, and even the future. Consider beliefs about the external world, and assume that we know what we do about the external world through our knowledge of subjective and fleeting sensations. It seems obvious that we cannot deduce any truths about the existence of physical objects from truths describing the character of our sensations. But neither can we observe a correlation between sensations and something other than sensations, since by hypothesis all we ever have to rely on ultimately is knowledge of our sensations. Nevertheless, we may be able to posit physical objects as the best explanation for the character and order of our sensations. In the same way, various hypotheses about the past might best explain present memory; theoretical postulates in physics might best explain phenomena in the macro-world; and it is even possible that beliefs about the future might be justified as posits that best explain past observations. But what exactly is the form of an inference to the best explanation? If we are to distinguish between legitimate and illegitimate reasoning to the best explanation, it would seem that we need a more sophisticated model of the argument form. In reasoning to an explanation we need ‘criteria’ for choosing between alternative explanations, and if reasoning to the best explanation is to constitute a genuine alternative to inductive reasoning, it is important that these criteria not be implicit premises which would convert our argument into an inductive argument.

However, in evaluating the claim that inference to the best explanation constitutes a legitimate and independent argument form, one must explore the question of whether it is a contingent fact that at least most phenomena have explanations, and that explanations satisfying a given criterion - simplicity, for example - are more likely to be correct. It may well be true that the universe is structured in such a way that simple, powerful, familiar explanations are usually the correct ones, but this would be an empirical fact about our universe, discovered only a posteriori. If reasoning to the best explanation relies on such criteria, it seems that one cannot without circularity use reasoning to the best explanation to discover that reliance on such criteria is safe. But if one has some independent way of discovering that simple, powerful, familiar explanations are more often correct, then why should we think that reasoning to the best explanation is an independent source of information about the world? Indeed, why should we not conclude that it would be more perspicuous to represent the reasoning this way: that is, as simply an instance of familiar inductive reasoning?

5. The observable behaviour from which we thus infer consists of bare bodily movements caused by inner mental events. The outer (behaviour) is not logically connected with the inner (the mental). Hence, the mental is essentially private, known ‘stricto sensu’ only to its owner, and the private and subjective is better known than the public.

The resultant picture leads first to scepticism and then, ineluctably, to ‘solipsism’. Since pretence and deceit are always logically possible, one can never be sure whether another person is really having the experience he behaviourally appears to be having. But worse, if a given psychological predicate means ‘this’ (which I have), and no one else could logically have it - since experience is inalienable - then I can attach no sense to the supposition of other subjects of experience. Similar scepticism infects mutual intelligibility: if the defining samples of the primitive terms of a language are private, then I cannot be sure that what you mean by ‘red’ or ‘pain’ is not qualitatively identical with what I mean by ‘green’ or ‘pleasure’. And nothing can stop us from concluding that all languages are private and strictly mutually unintelligible.

Philosophers had always been aware of the problematic nature of knowledge of other minds and of the mutual intelligibility of speech implied by their favoured picture. It is a manifestation of Wittgenstein’s genius to have launched his attack at the point which seemed incontestable - asking not whether I can know of the experiences of others, nor whether I can understand the ‘private language’ of another in attempted communication, but whether I can understand my own allegedly private language.

The functionalist thinks of ‘mental states’ and events as causally mediating between a subject’s sensory inputs and that subject’s ensuing behaviour. Functionalism is the doctrine that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the functional relation it bears to the subject’s perceptual stimuli, behavioural responses and other mental states. That is to say, functionalism is one of the great ‘isms’ that have been offered as solutions to the mind/body problem. The cluster of questions that all of these ‘isms’ promise to answer can be expressed as: What is the ultimate nature of the mental? At the most general level, what makes a mental state mental? At the more specific level that has been the focus in recent years: What do thoughts have in common in virtue of which they are thoughts? That is, what makes a thought a thought? What makes a pain a pain? Cartesian dualism said the ultimate nature of the mental was to be found in a special mental substance. Behaviourism identified mental states with behavioural dispositions; physicalism, in its most influential version, identifies mental states with brain states - the relevant physical states being various sorts of neural state, even though our concepts of mental states such as thinking and feeling are of course different from our concepts of neural states.

Disaffected by Cartesian dualism and by the ‘first-person’ perspective of introspective psychology, the behaviourists had claimed that there is nothing to the mind but the subject’s behaviour and dispositions to behave. For example, for Rudolf to be in pain is for Rudolf either to be behaving in a wincing-groaning-and-favouring way or to be disposed to do so (when nothing is keeping him from doing so): it is nothing about Rudolf’s putative inner life or any episode taking place within him.

Though behaviourism avoided a number of nasty objections to dualism (notably Descartes’ admitted problem of mind-body interaction), some theorists were uneasy: they felt that in its total repudiation of the inner, behaviourism was leaving out something real and important. U.T. Place spoke of an ‘intractable residue’ of conscious mental items that bear no clear relations to behaviour of any particular sort. And it seems perfectly possible for two people to differ psychologically despite total similarity of their actual and counterfactual behaviour, as in a Lockean case of ‘inverted spectrum’: for that matter, a creature might exhibit all the appropriate stimulus-response relations and lack mentation entirely.

For such reasons, Place and the Cambridge-born Australian philosopher J.J.C. Smart proposed a middle way, the ‘identity theory’, which allowed that at least some mental states and events are genuinely inner and genuinely episodic after all: they are not to be identified with outward behaviour or even with hypothetical dispositions to behave. But, contrary to dualism, the episodic mental items are not ghostly or non-physical either: they are neurophysiological. Yet pain is the very paradigm of an experience that seems to resist ‘reduction’ in terms of behaviour. Although ‘pain’ obviously has behavioural consequences, being unpleasant, disruptive and sometimes overwhelming, there is also something more than behaviour, something ‘that it is like’ to be in pain, and there is all the difference in the world between pain behaviour accompanied by pain and the same behaviour without pain. Theories identifying pain with the neural events subserving it have accordingly been attacked, e.g., by Kripke, on the grounds that while a genuine metaphysical identity should be necessarily true, the association between pain and any such events would be contingent.

Nonetheless, the American philosophers Hilary Putnam (1926-) and Jerry Alan Fodor (1935-) pointed out a presumptuous implication of the identity theory understood as a theory of types or kinds of mental items: that a mental type such as pain has always and everywhere the neurophysiological characterization initially assigned to it. For example, if the identity theorist identified pain itself with the firing of c-fibres, it followed that a creature of any species (earthly or science-fiction) could be in pain only if that creature had c-fibres and they were firing. However, such a constraint on the biology of any being capable of feeling pain is both gratuitous and indefensible: why should we suppose that any organism must be made of the same chemical materials as us in order to have what can accurately be recognized as pain? The identity theorists had overreacted to the behaviourists’ difficulties and focussed too narrowly on the specifics of biological humans’ actual inner states, and in doing so they had fallen into species chauvinism.

Fodor and Putnam advocated the obvious correction: what was important was not the c-fibres per se, but what the c-fibres were doing, what their firing contributed to the operation of the organism as a whole. The role of the c-fibres could have been performed by any mechanically suitable component: so long as that role was performed, the psychological constitution of the organism would have been unaffected. Thus, to be in pain is not, per se, to have c-fibres that are firing, but merely to be in some state or other, of whatever biochemical description, that plays the same functional role as the firing of c-fibres plays in human beings. We may continue to maintain that pain ‘tokens’ - individual instances of pain occurring in particular subjects at particular times - are identical with particular neurophysiological states of those subjects at those times, namely whichever states happen to be playing the appropriate roles: this is the thesis of ‘token identity’ or ‘token physicalism’. But pain itself (the kind, universal or type) can be identified only with something more abstract: the causal or functional role that c-fibre firings share with their potential replacements or surrogates. Mental state-types are identified not with neurophysiological types but with more abstract functional roles, as specified by state-tokens’ relations to the organism’s inputs, outputs and other psychological states.

Functionalism has two distinct sources: Putnam and Fodor saw mental states in terms of an empirical computational theory of the mind, while Smart’s ‘topic-neutral’ analyses led Armstrong and Lewis to a functional analysis of mental concepts. In addition, Wittgenstein’s idea of meaning as use led to a version of functionalism as a theory of meaning, further developed by Wilfrid Sellars (1912-89) and later by Harman.

One motivation behind functionalism can be appreciated by attention to artefact concepts like ‘carburettor’ and biological concepts like ‘kidney’. What it is for something to be a carburettor is for it to mix fuel and air in an internal combustion engine: ‘carburettor’ is a functional concept. In the case of ‘kidney’, too, the scientific concept is functional - defined in terms of a role in filtering the blood and maintaining certain chemical balances.

The kind of function relevant to the mind can be introduced through the example of a parity-detecting automaton, whose states are definable entirely by their relations to inputs, outputs and one another. According to functionalism, all there is to being in pain, likewise, is being in a state that causes one to say ‘ouch’, to wonder whether one is ill, and so forth. The method for defining automaton states is supposed to work for mental states as well: mental states can be totally characterized in terms that involve only logico-mathematical language and terms for input signals and behavioural outputs. Thus, functionalism satisfies one of the desiderata of behaviourism: it characterizes the mental in entirely non-mental language.

Suppose we have a theory of mental states that specifies all the causal relations among the states, sensory inputs and behavioural outputs. Focussing on pain as a sample mental state, it might say, among other things, that sitting on a tack causes pain, and that pain causes anxiety and saying ‘ouch’. Agreeing, for the sake of the example, to go along with this moronic theory, functionalism would then say that we could define ‘pain’ as follows: being in pain = being in the first of two states, the first of which is caused by sitting on tacks, and which in turn causes the other state and the emitting of ‘ouch’. More symbolically:

Being in pain = being an x such that ∃P ∃Q [sitting on a tack causes P, and P causes both Q and emitting ‘ouch’, and x is in P]

More generally, if T is a psychological theory with ‘n’ mental terms of which the seventeenth is ‘pain’, we can define ‘pain’ relative to T as follows (the ‘F1' . . . ‘Fn’ are variables that replace the ‘n’ mental terms):

Being in pain = being an x such that ∃F1 . . . ∃Fn [T(F1 . . . Fn) & x is in F17]

The existentially quantified part of the right-hand side before the ‘&’ is the Ramsey sentence of the theory ‘T’. In this way, functionalism characterizes the mental in non-mental terms, in terms that involve quantification over realizations of mental states but no explicit mention of them: thus, functionalism characterizes the mental in terms of structures that are tacked down to reality only at the inputs and outputs.
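
Spelling this out for the moronic theory above: its Ramsey sentence is ∃P ∃Q [sitting on a tack causes P, and P causes both Q and emitting ‘ouch’], and ‘x is in pain’ is then defined as the claim that this sentence holds and that x is in the state playing the P-role. Nothing mental is explicitly mentioned: ‘pain’ is picked out purely by its place in the causal network.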

The psychological theory ‘T’ just mentioned can be either an empirical psychological theory or else a common-sense ‘folk’ theory, and the resulting functionalisms are very different. In the former case, which is named ‘psychofunctionalism’, the functional definitions are supposed to fix the extensions of mental terms. In the latter case, conceptual functionalism, the functional definitions are aimed at capturing our ordinary mental concepts. (This distinction reveals an ambiguity in the original question of what the ultimate nature of the mental is.) The idea of psychofunctionalism is that the scientific nature of the mental consists not in anything biological, but in something ‘organizational’, analogous to computational structure. Conceptual functionalism, by contrast, can be thought of as a development of logical behaviourism. Logical behaviourists thought that pain was a disposition to pain behaviour. But as the polemical British Catholic logician and moral philosopher Peter Thomas Geach (1916-) and the influential American philosopher and teacher Roderick Milton Chisholm (1916-99) pointed out, what counts as pain behaviour depends on the agent’s beliefs and desires. Conceptual functionalism avoids this problem by defining each mental state in terms of its contribution to dispositions to behave - and to have other mental states.

The functional characterization just given assumes a psychological theory with a finite number of mental state terms. In the case of monadic states like pain, the sensation of red, and so forth, it does seem a theoretical option simply to list the states and their relations to other states, inputs and outputs. But for a number of reasons, this is not a sensible theoretical option for belief-states, desire-states, and other propositional-attitude states. For one thing, the list would be too long to be represented without combinatorial methods: indeed, there is arguably no upper bound on the number of propositions any one of which could in principle be an object of thought. For another thing, there are systematic relations among beliefs: for example, the belief that ‘John loves Mary’ and the belief that ‘Mary loves John’ represent the same objects as related to each other in converse ways, and a theory of the nature of beliefs can hardly just leave out such an important feature of them. We cannot treat ‘believes-that-grass-is-green’, ‘believes-that-grass-is-blue’, and so forth, as unrelated primitive predicates. So we will need a more sophisticated theory, one that involves some sort of combinatorial apparatus. The most promising candidates are those that treat belief as a relation. But a relation to what? Two distinct issues arise here. One issue is how to formulate the functional theory so as to capture such logical relations among contents; one suggestion is in terms of a correspondence between the logical relations among sentences and the inferential relations among mental states. (An analogous difficulty besets purely ability-based accounts of knowledge: the knowledge acquired can appear embedded in larger contexts, as when one reasons that if this is what it is like to see red, then it is similar to what it is like to see orange, and such accounts face the same problem as non-cognitive analyses of ethical language in explaining the logical behaviour of the relevant predicates.) A second issue is what types of states could possibly realize the relational propositional-attitude states. Fodor (1987) has stressed the systematicity of propositional attitudes, pointing out that beliefs whose contents are systematically related exhibit the following sort of empirical relation: if one is capable of believing that Mary loves John, one is also capable of believing that John loves Mary. Fodor argues that only a language of thought in the brain could explain this fact.

Jerry Alan Fodor (1935-) is an American philosopher of mind well known for a resolute realism about the nature of mental functioning. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or inference. This sets him against ‘holists’ such as Donald Herbert Davidson (1917-2003) and ‘instrumentalists’ about mental ascription, such as Daniel Clement Dennett (1942-). In recent years he has also become a vocal critic of some of the aspirations of cognitive science, in such books as ‘The Language of Thought’ (1975), ‘The Modularity of Mind’ (1983), ‘Psychosemantics’ (1987), ‘The Elm and the Expert’ (1994), ‘Concepts: Where Cognitive Science Went Wrong’ (1998), and ‘Hume Variations’ (2003).

Purposively, ‘folk psychology’ is primarily ‘intentional explanation’: it is the idea that people’s behaviour can be explained by reference to the contents of their beliefs and desires. Correspondingly, the methodological issue is whether intentional explanation can be co-opted to make a science of the mind. Similar questions might be asked about the scientific potential of other folk-psychological concepts (consciousness, for example), but what makes intentional explanations problematic is that they presuppose that there are intentional states. What makes intentional states problematic is that they exhibit a pair of properties assembled in the concept of ‘intentionality’. In its current use, the expression ‘intentionality’ refers to that property of the mind by which it is directed at, about, or of objects and states of affairs in the world. Intentionality, so defined, includes such mental phenomena as belief, desire, intention, hope, fear, memory, hate, lust and disgust, as well as perception and intentional action. The pair of properties in question are these:

(1) Intentional states have causal powers. Thoughts (more precisely, havings of thoughts) make things happen: typically, thoughts make behaviour happen. Self-pity can make one weep, as can onions.

(2) Intentional states are semantically evaluable. Beliefs, for example, are about how things are, and are therefore true or false depending on whether things are the way they are believed to be. Consider, by contrast, tables, chairs, onions, and the cat’s being on the mat: though they all have causal powers, they are not about anything, and are therefore not evaluable as true or false.

If there is to be an intentional science, there must be semantically evaluable things that have causal powers. Moreover, there must be laws about such things, including, in particular, laws that relate beliefs and desires to one another and to actions. If there are no intentional laws, then there is no intentional science. Perhaps scientific explanation is not always explanation by law subsumption, but surely it often is, and there is no obvious reason why an intentional science should be exceptional in this respect. Moreover, one of the best reasons for supposing that common sense is right about there being intentional states is precisely that there seem to be many reliable intentional generalizations for such states to fall under. It is plausible to assume that many of the truisms of folk psychology either articulate intentional laws or come pretty close to doing so.

So, for example, it is a truism of folk psychology that rote repetition facilitates recall. (More generally, repetition improves performance: ‘How do you get to Carnegie Hall?’) This generalization relates the content of what you learn to the content of what you say to yourself while you are learning it: so what it expresses is, prima facie, a lawful causal relation between types of intentional states. Real psychology has lots more to say on this topic, but it is, nonetheless, much more of the same. To a first approximation, repetition does causally facilitate recall, and that it does so is lawful.

There are, to put it mildly, many other cases of such reliable intentional causal generalizations. There are also many, many kinds of folk-psychological generalizations about ‘correlations’ among intentional states, and these too are plausible candidates for fleshing out as intentional laws: for example, that anyone who knows what 7 + 5 is also knows what 7 + 6 is; that anyone who knows what ‘John loves Mary’ means also knows what ‘Mary loves John’ means; and so forth.

Philosophical opinion about folk-psychological intentional generalizations runs the gamut from ‘there are not any that are really reliable’ to ‘they are all platitudinously true, hence not empirical at all’. Nevertheless, suffice it to say that the necessity of ‘if 7 + 5 = 12, then 7 + 6 = 13’ is quite compatible with the ‘contingency’ of ‘if someone knows that 7 + 5 = 12, then he knows that 7 + 6 = 13’. And part of the question ‘How can there be an intentional science?’ is ‘How can there be intentional laws?’

Let us assume, most generally, that laws support counterfactuals and are confirmed by their instances. Assume, further, that every law is either basic or not. Basic laws are either exceptionless or intractably statistical, and the only basic laws are the laws of basic physics.

All non-basic laws, including the laws of all the non-basic sciences - in particular, the intentional laws of psychology - are ‘c[eteris] p[aribus]’ laws: they hold only ‘all else being equal’. There is - anyhow, there ought to be - a whole department of the philosophy of science devoted to the construal of cp laws: to making clear, for instance, how they can be explanatory, how they can support counterfactuals, how they can subsume the singular causal truths that instance them, and so forth. These issues are omitted in what follows because they do not belong to philosophical psychology as such: the laws of psychology are cp laws because psychology is a special, i.e., non-basic, science, not because it is an intentional science.

There is a further quite general property that distinguishes cp laws from basic ones: non-basic laws want mechanisms for their implementation. Suppose, for a working example, that some special science states that being ‘F’ causes xs to be ‘G’. (Being irradiated by sunlight causes plants to photosynthesize; being freely suspended near the earth’s surface causes bodies to fall with uniform acceleration; and so on.) Then it is a constraint on this generalization’s being lawful that there be an answer to the question ‘How does being F cause xs to be G?’ This, however, is one of the ways special science laws differ from basic laws. A basic law says that F’s cause G’s, full stop: if there were an account explaining how, or why, or by what means F’s cause G’s, the law would have been not basic but derived.

Typically - though not invariably - the mechanism that implements a special science law is defined over the micro-structure of the things that satisfy the law. The answer to ‘How does sunlight make plants photosynthesize?’ implicates the chemical structure of plants; the answer to ‘How does freezing make water solid?’ implicates the molecular structure of water; and so forth. In consequence, theories about how a law is implemented usually draw on the vocabularies of two or more levels of explanation.

If you are specially interested in the peculiarities of aggregates of matter at the Lth level (plants, or minds, or mountains, as it might be), then you are likely to be specially interested in implementing mechanisms at the L-1th level (the ‘immediate’ mechanisms): this is because the characteristics of L-level laws can often be explained by the characteristics of their L-1th level implementations. You can learn a lot about plants qua plants by studying their chemical composition. You learn correspondingly less by studying their subatomic constituents, though, no doubt, laws about plants are implemented, eventually, sub-atomically. The question thus arises of what mechanisms might immediately implement the intentional laws of psychology, thereby accounting for their characteristic features.

Intentional laws subsume causal interactions among mental processes; that much is truistic. But something substantive is being claimed here, something that a theory of the implementation of intentional laws will have to account for: the causal processes that intentional states enter into have a tendency to preserve their semantic properties. For example, thinking true thoughts has an inclination to cause one to think more thoughts that are also true. This is no small matter: the very rationality of thought depends on such facts, as when the true thought that ((P ➞ Q) and (P)) causes the true thought that ‘Q’.

A good deal of what has happened in psychology - notably since the Viennese founder of psychoanalysis, Sigmund Freud (1856-1939) - has consisted of finding new and surprising cases where mental processes are semantically coherent under intentional characterizations. Freud made his reputation by showing that this was true even of much of the detritus of behaviour: dreams, verbal slips and the like, even free or word association and the ink-blot identification cards of the Rorschach test. Even so, it turns out that the psychology of normal mental processes is largely grist for the same mill. For example, it turns out to be theoretically revealing to construe perceptual processes as inferences that take specifications of proximal stimulations as premises and yield specifications of the distal layout as conclusions, inferences that are reliably truth-preserving in ecologically normal circumstances. The psychology of learning cries out for analogous treatment, e.g., for treatment as a process of hypothesis formation and confirmation.

Intentional states, as common sense understands them, have both causal and semantic properties, and the combination appears to be unprecedented: propositions are semantically evaluable, but they are abstract objects and have no causal powers; onions are concrete particulars and have causal powers, but they are not semantically evaluable. Intentional states seem to be unique in combining the two, and that is what so many philosophers have against them.

Suppose, once again, that ‘the cat is on the mat’. On the one hand, an inscription of this sentence is a concrete particular in good standing, and it has, qua material object, an open-ended galaxy of causal powers. (It reflects light in ways that are essential to its legibility; it exerts a small but in principle detectable gravitational effect upon the moon; and so on.) On the other hand, the inscription is about something and is therefore semantically evaluable: it is true if and only if there is a cat where it says that there is. So the inscription of ‘the cat is on the mat’ has both content and causal powers, and so does my thought that the cat is on the mat.

At this point, we may ask how many words there are in the sentence ‘The cat is on the mat’. There are at least two answers to this question, because one can count either word types, of which there are five, or individual occurrences - known as tokens - of which there are six. Moreover, depending on how one chooses to think of word types, another answer is possible: since the sentence contains a definite article, nouns, a preposition and a verb, there are four grammatically different types of word in the sentence.

The type/token distinction, understood as a distinction between sorts of things and their instances, is commonly applied to mental phenomena. For example, one can think of pain in the type way, as when we say that we have experienced burning pain many times; or in the token way, as when we speak of the burning pain currently being suffered. The type/token distinction for mental states and events becomes important in the context of attempts to describe the relationship between mental and physical phenomena. In particular, the identity theory asserts that mental states are physical states, and this raises the question whether the identity in question is of types or of tokens.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural state. Our concepts of mental states such as thinking, sensing and feeling are, of course, different from our concepts of neural states, of whatever sort. Still, that is no problem for the identity theory. As J.J.C. Smart (1962), who first argued for the identity theory, emphasized, the requisite identity does not depend on our concepts of mental states or on the meanings of mental terminology. For ‘a’ to be identical with ‘b’, both must have exactly the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The principle of the indiscernibility of identicals states that if ‘a’ is identical with ‘b’, then every property that ‘a’ has ‘b’ has, and vice versa. This is sometimes known as Leibniz’s law.
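
In standard notation, the principle just stated can be written as a single second-order schema:

    \[ a = b \;\rightarrow\; \forall F\,(Fa \leftrightarrow Fb) \]

Read: if a is identical with b, then any property F had by a is had by b, and vice versa. Nothing in the schema mentions what the terms ‘a’ and ‘b’ mean, which is Smart’s point.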

However, a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain, and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject a dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical properties.

The problem just sketched about mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. So even if mental states are all identical with physical states, those states appear to have properties that are not physical. And if mental states do actually have non-physical properties, the identity of mental with physical states would not sustain a thoroughgoing mind-body materialism.

A more sophisticated reply to the difficulty about mental properties is due independently to D.M. Armstrong (1926-), the forthright Australian materialist who was, together with J.J.C. Smart, among the leading Australian philosophers of the second half of the twentieth century, and to the American philosopher David Lewis (1941-2002). They argue that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between mental and physical, since anything can bear causal relations to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.

Early identity theorists insisted that the identity between mental and bodily events was contingent, meaning simply that the relevant identity statements were not conceptual truths. That leaves open the question of whether such identities would be necessarily true on other construals of necessity.

The American logician and philosopher Saul Aaron Kripke (1940-) made his early reputation as a logical prodigy, especially through his work on the completeness of systems of modal logic. The three classic papers are ‘A Completeness Theorem in Modal Logic’ (1959, Journal of Symbolic Logic), ‘Semantical Analysis of Modal Logic’ (1963, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik) and ‘Semantical Considerations on Modal Logic’ (1963, Acta Philosophica Fennica). In ‘Naming and Necessity’ (1980), Kripke gave the classic modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its bearer. His ‘Wittgenstein on Rules and Private Language’ (1982) also proved seminal, putting the rule-following considerations at the centre of Wittgenstein studies, and arguing that the private language argument is an application of them. Kripke has also written influential work on the theory of truth and the solution of the semantic paradoxes.

Nonetheless, Kripke (1980) has argued that such identities would have to be necessarily true if they were true at all. Some terms refer to things contingently, in that those terms would have referred to different things had circumstances been relevantly different. Kripke’s example is ‘the first Postmaster General of the United States’, which, in different circumstances, would have referred to somebody other than Benjamin Franklin. Kripke calls such terms non-rigid designators. Other terms refer to their referents necessarily: no circumstances are possible in which they would refer to anything else. These terms are rigid designators.

If the terms ‘a’ and ‘b’ refer to the same thing and both determine that thing necessarily, the identity statement ‘a = b’ is necessarily true. Kripke maintains that the term ‘pain’ and the terms for the various brain states all determine the states they refer to necessarily: no circumstances are possible in which these terms would refer to different things. So if pain were identical with some particular brain state, it would be necessarily identical with that state. Yet Kripke argues that pain cannot be necessarily identical with any brain state, since the tie between pains and brain states plainly seems contingent. He concludes that they cannot be identical at all.
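
The modal principle driving the argument, for rigid designators ‘a’ and ‘b’, is the necessity of identity, usually put as:

    \[ a = b \;\rightarrow\; \Box\,(a = b) \]

Contraposing: if it is so much as possible that pain and a given brain state are distinct, \( \Diamond\,(a \neq b) \), then they are not identical. This is why the apparent contingency of the pain/brain-state tie carries so much weight in Kripke’s argument.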

Kripke notes that our intuitions about whether an identity is contingent can mislead us. Heat is necessarily identical with mean molecular kinetic energy: no circumstances are possible in which they are not identical. Still, it may at first sight appear that heat could have been identical with some other phenomenon; but it appears that way, Kripke argues, only because we pick out heat by our sensation of heat, which bears only a contingent connection to mean molecular kinetic energy. It is the sensation of heat that is connected contingently with mean molecular kinetic energy, not the physical heat itself.

Kripke insists, however, that such reasoning cannot disarm our intuitive sense that pain is connected only contingently with brain states. This is because for a state to be pain it must necessarily be felt as pain. Unlike heat, in the case of pain there is no difference between the state itself and how that state is felt, and intuitions about the one are perforce intuitions about the other.

Kripke’s assumption about the term ‘pain’ is open to question. As Lewis notes, one need not hold that ‘pain’ determines the same state in all possible situations; indeed, the causal theory explicitly allows that it may not. And if it does not, it may be that pains and brain states are contingently identical. But there is also a problem about a substantive assumption Kripke makes about the nature of pains, namely that pains are necessarily felt as pains. First impressions notwithstanding, there is reason to think not. There are times when we are not aware of our pains, for example when we are suitably distracted. So the relationship between pains and our being aware of them may be contingent after all, just as the relationship between physical heat and our sensation of heat is. And that would disarm the intuition that pain is connected only contingently with brain states.

Kripke’s argument focuses on pains and other sensations, which, because they have qualitative properties, are frequently held to cause the greatest problems for the identity theory. The American philosopher Thomas Nagel (1937-) traces the general difficulty for the identity theory to the consciousness of mental states. A mental state’s being conscious, he urges, means that there is something it is like to be in that state. And to understand that, we must adopt the point of view of the kind of creature that is in the state. But an account of something is objective, he insists, only insofar as it is independent of any particular type of point of view. Since consciousness is inextricably tied to points of view, no objective account of it is possible. And that means conscious states cannot be identical with bodily states.

The viewpoint of a creature is central to what that creature’s conscious states are like, because different kinds of creatures have conscious states with different kinds of qualitative property. However, the qualitative properties of a creature’s conscious states depend, in an objective way, on that creature’s perceptual apparatus. We cannot always predict what another creature’s conscious states are like, just as we cannot always extrapolate from microscopic to macroscopic properties, at least without a suitable theory that covers those properties. But what a creature’s conscious states are like depends in an objective way on its bodily endowment, which is itself objective. So these considerations give us no reason to think that what those conscious states are like is not also an objective matter.

If a sensation is not conscious, there is nothing it is like to have it. So Nagel’s idea that what it is like to have sensations is central to their nature suggests that sensations cannot occur without being conscious. And that, in turn, seems to threaten their objectivity: if sensations must be conscious, perhaps they have no nature independently of how we are aware of them, and thus no objective nature. Nonetheless, only conscious sensations seem to cause problems for the identity theory.

The notion of subjectivity, as Nagel again sees it, is the notion of a point of view, related to what psychologists call a ‘constructionist theory of mind’. This notion is clearly tied to the notion of essential subjectivity. This kind of subjectivity is constituted by an awareness of the world’s being experienced differently by different subjects of experience. (It is thus possible to see how the privacy of phenomenal experience might easily be confused with the kind of privacy inherent in a point of view.)

Point-of-view subjectivity seems to take time to develop. The developmental evidence suggests that even toddlers are able to understand others as being subjects of experience. For instance, at a very early age we begin ascribing mental states to other things - generally, to those same things to which we ascribe ‘eating’. And at quite an early age we can say what others would see from where they are standing. We demonstrate early on an understanding that the information available is different for different perceivers. It is in these perceptual senses that we first ascribe point-of-view subjectivity.

Nonetheless, some experiments seem to show that the point-of-view subjectivity we first ascribe to others is limited. A popular and influential series of experiments by Wimmer and Perner (1983) is usually taken to illustrate these limitations (though there are disagreements about the interpretation). Two children - Dick and Jane - watch as an experimenter puts a box of candy somewhere opaque, such as in a cookie jar. Jane leaves the room. Dick is asked where Jane will look for the candy, and he correctly answers, ‘In the cookie jar’. The experimenter, in Dick’s view, then takes the candy out of the cookie jar and puts it in another opaque place, a drawer, say. When Dick is asked where to look for the candy, he says quite correctly, ‘In the drawer’. But when asked where Jane will look for the candy when she returns, Dick also answers, ‘In the drawer’. Dick ascribes to Jane, not the point-of-view subjectivity she is likely to have, but the one that fits the facts. Dick is unable to ascribe to Jane a false belief - his ascription is ‘reality-driven’ - and his inability suggests that Dick does not as yet have a fully developed point-of-view subjectivity.

At around the age of four, children in Dick’s position do ascribe the right point-of-view subjectivity to children in Jane’s position (‘Jane will look in the cookie jar’). But even so, a fully developed notion of point-of-view subjectivity is not yet attained. Suppose that Dick and Jane are shown a dog under a tree, but only Dick is shown the dog arriving there by chasing a boy up the tree. If Dick is asked to describe what Jane, who he knows did not see the dog arrive, will make of the scene, he will display a more fully developed point-of-view subjectivity only if his description leaves out the preliminaries that he alone witnessed. It turns out that four-year-olds are still restricted by this limitation; only when children are six to seven do they succeed.

Yet even when children succeed in these cases, their point-of-view subjectivity is reality-driven: ascribing point-of-view subjectivity to others is still done in terms relative to the information available. Only in our teens do we seem capable of understanding that others can view the world differently from ourselves even when given access to the same information. Only then do we seem to become aware of the subjectivity of the knowing procedure itself: interpreting the ‘facts’ can be coloured by one’s knowing procedure and history. There are no ‘merely’ objective facts.

Thus, there is evidence that we ascribe a more and more subjective point of view to others: from the point-of-view subjectivity we ascribe being completely reality-driven, to the possibility that others have insufficient information, to their having merely different information, and finally to their understanding the same information differently. This developmental picture seems insufficiently familiar to philosophers - and yet well worth our thinking about and critically evaluating.

The following questions all need answering. Does the apparent fact that the point-of-view subjectivity we ascribe to others develops over time, becoming more and more ‘private’, shed any light on the sort of subjectivity we ascribe to ourselves? Do our self-ascriptions of subjectivity themselves become more and more ‘private’, more and more removed both from the subjectivity of others and from the objective world? If so, what is the philosophical importance of these facts? At the least, this developmental history shows that disentangling ourselves from the world we live in is a complicated matter.

The last two decades have been a period of extraordinary change, especially in psychology. Cognitive psychology, which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and higher-level visual processing, has become - perhaps - the dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour. Largely as a result of this paradigm shift, the level of interaction between the disciplines of philosophy and psychology has increased dramatically.

Nevertheless, developmental psychology was for a time dominated by the ideas of the Swiss psychologist and pioneer of learning theory, Jean Piaget (1896-1980), whose primary concern was a theory of cognitive development (his own term was ‘genetic epistemology’). What is more, like modern-day cognitive psychologists, Piaget was interested in the mental representations and processes that underlie cognitive skills. However, Piaget’s genetic epistemology never co-existed happily with cognitive psychology, though his idea that reasoning is based on an internalized version of the predicate calculus has influenced research into adult thinking and reasoning. One reason for the lack of side-by-side interaction between genetic epistemology and cognitive psychology was that, just as cognitive psychology began to attain prominence, developmental psychologists were starting to question Piaget’s ideas. Many of his empirical claims about the abilities, or more accurately the inabilities, of children of various ages were discovered to be contaminated by his unorthodox, and in retrospect unsatisfactory, empirical methods. And many of his theoretical ideas were seen to be vague, uninterpretable, or inconsistent.

One of the central goals of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explanations of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. The philosophy of physics is another area in which studies of this sort have been actively pursued. In undertaking this work, philosophers need not (and typically do not) assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts, and explanatory strategies that scientists are using - accounts that are more explicit, systematic and philosophically sophisticated than the rather rough-and-ready accounts offered by scientists themselves.

Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of ‘intentional’ concepts - like believing that ‘p’, desiring that ‘q’, and representing ‘r’ - which do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories.

If a person ‘X’ thinks that ‘p’, desires that ‘p’, believes that ‘p’, is angry at ‘p’ and so forth, then he or she is described as having a propositional attitude to ‘p’. The term suggests that these aspects of mental life are well thought of in terms of a relation to a ‘proposition’, and this is not universally agreed. It suggests that knowing what someone believes, and so on, is a matter of identifying an abstract object of their thought, rather than of understanding his or her orientation towards more worldly objects.

Once again, the directedness or ‘aboutness’ of many, if not all, conscious states constitutes their ‘intentionality’. The term was used by the scholastics. Beliefs, thoughts, wishes, dreams, and desires are about things; equally, the sentences we use to express these beliefs and other mental states are about things. The problem of intentionality is that of understanding the relation obtaining between a mental state, or its expression, and the things it is about. A number of peculiarities attend this relation. First, if I am in some relation to a chair, for instance by sitting on it, then both it and I must exist. But while mostly one thinks about things that exist, sometimes (although this way of putting it has its problems) one has beliefs, hopes, and fears about things that do not, as when the child expects Santa Claus, and the adult fears snakes. Secondly, if I sit on the chair, and the chair is the oldest antique chair in Toronto, then I sit on the oldest antique chair in Toronto. But if I plan to avoid the mad axeman, and the mad axeman is in fact my friendly postal carrier, I do not thereby plan to avoid my friendly postal carrier. The extension of a predicate is the class of objects it describes: the extension of ‘red’ is the class of red things. The intension is the principle under which it picks them out, or in other words the condition a thing must satisfy to be truly described by the predicate. Two predicates (‘. . . is a rational animal’, ‘. . . is a featherless biped’) might pick out the same class, but they do so by different conditions. If the notions are extended to other items, then the extension of a sentence is its truth-value, and its intension a thought or proposition; and the extension of a singular term is the object referred to by it, if it so refers, and its intension is the concept by means of which the object is picked out. A context is extensional if any other predicate or term with the same extension can be substituted in it without it being possible that the truth-value changes: if John is a rational animal and we substitute the coextensive ‘is a featherless biped’, then ‘John is a featherless biped’ is likewise true. Other contexts, such as ‘Mary believes that John is a rational animal’, may not allow the substitution, and are called ‘intensional contexts’.
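
The extension/intension contrast can be made vivid with a small sketch in Python (the toy domain and its property names are invented purely for illustration): two predicates defined by different conditions - different intensions - can nonetheless pick out the same class of objects, the same extension.

    # A toy domain of objects, each listed with a few properties.
    domain = [
        {"name": "John",   "rational": True,  "feathered": False, "biped": True},
        {"name": "Mary",   "rational": True,  "feathered": False, "biped": True},
        {"name": "Tweety", "rational": False, "feathered": True,  "biped": True},
    ]

    # Two different conditions (intensions)...
    def is_rational_animal(x):
        return x["rational"]

    def is_featherless_biped(x):
        return x["biped"] and not x["feathered"]

    # ...that happen to pick out the same class (extension) in this domain.
    extension_1 = {x["name"] for x in domain if is_rational_animal(x)}
    extension_2 = {x["name"] for x in domain if is_featherless_biped(x)}

    print(extension_1 == extension_2)  # True: same extension, different intensions

Substituting one predicate for the other preserves truth in extensional contexts precisely because the two sets coincide; nothing in the sets themselves records the differing conditions by which they were picked out.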

There remains a distinction between the contexts into which referring expressions can be put. A context is referentially transparent if any two terms referring to the same thing can be substituted in it ‘salva veritate’, i.e., without altering the truth or falsity of what is said. A context is referentially opaque when this is not so. Thus, if the number of the planets is nine, then ‘the number of planets is odd’ has the same truth-value as ‘nine is odd’; whereas ‘necessarily the number of planets is odd’ or ‘x knows that the number of planets is odd’ need not have the same truth-value as ‘necessarily nine is odd’ or ‘x knows that nine is odd’. So while ‘. . . is odd’ provides a transparent context, ‘necessarily . . . is odd’ and ‘x knows that . . . is odd’ do not.

Eliminativism, in a word, is the view that the terms in which we think of some area are sufficiently infected with error that it is better to abandon them than to continue to try to give coherent theories of their use. Eliminativism should be distinguished from scepticism, which claims that we cannot know the truth about some area; eliminativism claims that there is no truth there to be known, in the terms with which we currently think. An eliminativist about theology simply counsels abandoning the terms or discourse of theology, and that will include abandoning worries about the extent of theological knowledge. Eliminativists in the philosophy of mind counsel abandoning the whole network of terms - mind, consciousness, self, qualia - that usher in the problems of mind and body. Sometimes the argument for doing this is that we should await a supposed future understanding of ourselves, based on cognitive science and better than our current mental descriptions provide; sometimes it is supposed that physicalism shows that no mental description could possibly be true.

It seems, nonetheless, a widespread view that the concept of intentionality is indispensable: we must either declare that serious science cannot deal with the central feature of the mind, or explain how serious science may include intentionality. On one approach, the sentences with which we communicate fears and beliefs have a two-faced aspect, involving both the objects referred to and the mode of presentation under which they are thought of. We can then see the mind as essentially directed onto existent things and extensionally related to them; intentionality becomes a feature of language, rather than a metaphysical or ontological peculiarity of the mental world.

While cognitive psychologists occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, their comments are rarely systematic or philosophically illuminating. Thus, it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. Jerry Fodor’s ‘The Language of Thought’ (1975) was a pioneering study in this genre, one that continues to have a major impact on the field.

The relation between language and thought is philosophy’s chicken-or-egg problem. Language and thought are evidently importantly related, but how exactly are they related? Does language come first and make thought possible, or vice versa? Or are they on a par, each making the other possible?

When the question is stated at this level of generality, however, no unqualified answer is possible. In some respects language is prior, in other respects thought is prior. For example, it is arguable that a language is an abstract pairing of expressions and meanings - a function, in the set-theoretic sense, from expressions onto meanings. This makes sense of the fact that Esperanto is a language no one speaks, and it explains why, while it is a contingent fact that ‘La neige est blanche’ means that snow is white among French speakers, it is a necessary truth that it means that in French. If French and English are abstract objects in this sense, then they exist whether or not anyone speaks them: they even exist in possible worlds in which there are no thinkers. In this respect, then, language, as well as such notions as meaning and truth in a language, is prior to thought.

But even if languages are construed as abstract expression-meaning pairings, they are so construed as abstractions from actual linguistic practice - from the use of language in communicative behaviour - and there remains a clear sense in which language is dependent on thought. The sequence of marks ‘Point Pelee is the most southern point of Canada’ means among us that Point Pelee is the most southern point of Canada. Had our linguistic practice been different, that same sequence of marks might have meant something else, or nothing at all, among us. Plainly, the fact that it means what it does among us has something to do with the beliefs and intentions underlying our use of the words and structures that compose the sentence. More generally, it is a platitude that the semantic features that marks and sounds have in a population of speakers are at least partly determined by the propositional attitudes of those speakers - this is the platitude, of course, which says that meaning depends, in part, on use in communicative behaviour. So here is one clear sense in which language is dependent on thought: thought is required to imbue marks and sounds with the semantic features they have in populations of speakers.

The sense in which language does depend on thought can be wedded to the sense in which it does not in the following way. We can say that a sequence of marks or sounds (or whatever) ‘ς’ means ‘q’ in a language ‘L’, construed as a function from expressions onto meanings, iff L(ς) = q. This notion of meaning-in-a-language, like the notion of a language, is a mere set-theoretic notion that is independent of thought, in that it presupposes nothing about the propositional attitudes of language users: ‘ς’ can mean ‘q’ in ‘L’ even if ‘L’ has never been used. But then we can also say that ‘ς’ means ‘q’ in a population ‘P’. The question of moment then becomes: what relation must a population ‘P’ bear to a language ‘L’ in order for it to be the case that ‘L’ is a language of ‘P’, a language members of ‘P’ actually speak? Whatever the answer to this question is, this much seems right: in order for a language to be a language of a population of speakers, those speakers must produce sentences of the language in their communicative behaviour. Since such behaviour is intentional, the notion of a language’s being the language of a population of speakers presupposes the notion of thought. And since that notion presupposes the notion of thought, the same is true of the correct account of the semantic features expressions have in populations of speakers.
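
Here is a minimal sketch in Python of the set-theoretic picture just given (the example sentences and the helper name ‘means_in’ are ours, introduced only for illustration): a language is a bare mapping from expressions to meanings, and meaning-in-a-language is simply function application, with no mention of speakers or their attitudes.

    # A 'language' in the abstract, set-theoretic sense: a function
    # (here, a dict) from expressions onto meanings. Nothing about any
    # population of speakers is presupposed.
    L = {
        "la neige est blanche": "snow is white",
        "le chat est sur le tapis": "the cat is on the mat",
    }

    def means_in(expression, language):
        # 'expression' means q in 'language' iff language(expression) = q.
        return language.get(expression)

    print(means_in("la neige est blanche", L))  # snow is white

    # The further, thought-involving question is the actual-language
    # relation: which population P, if any, bears the use-relation to L?
    # That cannot be read off the mapping itself.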

This is a pretty thin result, not likely to be disputed, and the difficult questions remain. We know that there is some relation ‘R’ such that a language ‘L’ is used by a population ‘P’ iff ‘L’ bears ‘R’ to ‘P’. Let us call this relation, whatever it turns out to be, the ‘actual-language relation’. We know that it must explain the semantic features expressions have among those who are apt to produce them, and we know that any account of the relation must require language users to have certain propositional attitudes. But how exactly is the actual-language relation to be explained in terms of the propositional attitudes of language users? And what sort of dependence might those propositional attitudes in turn have on language, or on the semantic features that are fixed by the actual-language relation? Let us consider first the relation of language to thought, before turning to the relation of thought to language.

All must agree that the actual-language relation, and with it the semantic features linguistic items have among speakers, is at least partly determined by the propositional attitudes of language users. This, however, leaves plenty of room for philosophers to disagree both about the extent of the determination and about the nature of the determining propositional attitudes. At one end of the spectrum are those who hold that the actual-language relation is wholly definable in terms of non-semantic propositional attitudes. This position in logical space is most famously occupied by the programme, sometimes called intention-based semantics, of the English philosopher of language Herbert Paul Grice (1913-1988). Grice introduced the important concept of an ‘implicature’ into the philosophy of language, arguing that not everything that is said is direct evidence for the meaning of some term, since many factors may determine the appropriateness of remarks independently of whether they are actually true. The point undermines excessive attention to the niceties of conversation as reliable indicators of meaning, a methodology characteristic of ‘linguistic philosophy’. In a number of elegant papers, Grice identified the meaning of an utterance with a complex of the intentions with which it is uttered. The psychological is thus used to explain the semantic, and the question of whether this is the correct priority has prompted considerable subsequent discussion.

The foundational notion in this enterprise is a certain notion of ‘speaker meaning’. It is the species of communicative behaviour reported when we say, for example, that in uttering ‘Il pleut’ Pierre meant that it was raining, or that in waving her hand the Queen meant that you were to leave the room. Intention-based semantics seeks to define this notion of speaker meaning wholly in terms of communicators’ audience-directed intentions, and without recourse to any semantic notions. It then seeks to define the actual-language relation in terms of the now-defined notion of speaker meaning, together with certain ancillary notions such as that of a conventional regularity or practice, themselves defined wholly in terms of non-semantic propositional attitudes. The definition in terms of speaker meaning of other agent-semantic notions, such as the notion of an illocutionary act, is also part of the intention-based semantics programme.
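
The flavour of the analysis can be conveyed by the familiar first approximation from Grice’s ‘Meaning’ (1957), stated here from memory and in simplified form: a speaker S means that p by uttering x iff S utters x intending (1) to produce in some audience A the belief that p; (2) A to recognize intention (1); and (3) A’s recognition of intention (1) to function as at least part of A’s reason for believing that p. Notice that no semantic notion appears in the three clauses; that is precisely the point of the programme.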

Some philosophers object to intention-based semantics because they think it precludes a dependence of thought on the communicative use of language. This is a mistake: even if intention-based semantic definitions are given a strong reductionist reading, as saying that public-language semantic properties (i.e., those semantic properties that supervene on use in communicative behaviour) just are psychological properties, it might still be that one could not have propositional attitudes unless one had mastery of a public language. The concept of supervenience has seen increasing service in the philosophy of mind. The thesis that the mental is supervenient on the physical - roughly, the claim that the mental character of a thing is wholly determined by its physical nature - has played a key role in the formulation of some influential positions on the mind-body problem, in particular versions of non-reductive physicalism. Mind-body supervenience has also been invoked in arguments for or against certain specific claims about the mental, and has been used to devise solutions to some central problems about the mind - for example, the problem of mental causation - on views according to which the psychological level of description carries with it a mode of explanation which ‘has no echo in physical theory’.

Mental events, states or processes with content include seeing that the door is shut, believing you are being followed, and calculating the square root of 2. What centrally distinguishes states, events, or processes with content is that they involve reference to objects, properties or relations. A mental state with content can fail to refer, but there always exists a specific condition for a state with content to refer to certain things. When the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them. This characterization leaves open the possibility that unconscious states, as well as conscious states, have content, and it equally allows the states identified by an empirical, computational psychology to have content. A correct philosophical understanding of this general notion of content is fundamental not only to the philosophy of mind and psychology, but also to the theory of knowledge and to metaphysics.

There is a long-standing tradition that emphasizes that the reason-giving relation is a logical or conceptual one. One way of bringing out the nature of this conceptual link is by the reconstruction of reasoning linking the agent’s reason-providing states with the states for which they provide reasons. This reasoning is easiest to reconstruct in the case of reasons for belief, where the contents of the reason-providing beliefs inductively or deductively support the content of the rationalized belief. For example, I believe my colleague is in her room now, and my reasons are (1) she usually has a meeting in her room at 9:30 on Mondays and (2) it is now 9:30 on a Monday. To believe a proposition is to accept it as true, and it is relative to the objective of reaching truth that the rationalizing relations between contents are set for belief: they must be such that the truth of the premises makes likely the truth of the conclusion.

The causal explanatory approach to reason-giving explanations also requires an account of the intentional content of our psychological states which makes it possible for such content to do causal work. It also provides a motivation for the reduction of intentional characterizations to extensional ones, in an attempt to fit intentional causality into a fundamentally materialist world picture. The very nature of the reason-giving relation, however, can be seen to render such reductive projects unrealizable. This leaves causal theorists with the task of linking intentional and non-intentional levels of description in such a way as to accommodate intentional causality, without either over-determination or a miraculous coincidence of prediction from within distinct causal-explanatory frameworks.

The idea that mentality is physically realized is integral to the ‘functionalist’ conception of mentality, and this commits most functionalists to mind-body supervenience in one form or another. As a theory of mind, supervenience of the mental - in the form of strong supervenience, or at least global supervenience - is arguably a minimum commitment of physicalism. But can we think of the thesis of mind-body supervenience itself as a theory of the mind-body relation - that is, as a solution to the mind-body problem?

A supervenience claim consists of a claim of covariance and a claim of dependence (leaving aside the controversial claim of non-reducibility). This means that the thesis that the mental supervenes on the physical amounts to the conjunction of two claims: (1) strong or global supervenience, and (2) the mental depends on the physical. However, the thesis says nothing about just what kind of dependence is involved in mind-body supervenience. When you compare the supervenience thesis with the standard positions on the mind-body problem, you are struck by what the supervenience thesis does not say. For each of the classic mind-body theories has something to say, not necessarily anything very plausible, about the kind of dependence that characterizes the mind-body relationship. According to epiphenomenalism, for example, the dependence is one of causal dependence; on logical behaviourism, dependence is rooted in meaning dependence, or definability; on standard type physicalism, the dependence is the kind involved in the dependence of macro-properties on micro-properties; and so forth. Even Gottfried Wilhelm Leibniz (1646-1716) and Nicolas Malebranche (1638-1715) had something to say about this: the observed property covariation is due not to a direct dependency relation between mind and body, but rather to divine plans and interventions. That is, mind-body covariation was explained in terms of their dependence on a third factor - a sort of ‘common cause’ explanation.
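
For reference, the ‘strong supervenience’ mentioned in claim (1) is standardly formulated (following Kim, and stated here only as a rough gloss) as follows, where \( \mathcal{A} \) is the family of mental properties and \( \mathcal{B} \) the family of physical properties:

    \[ \Box\,\forall x\,\forall F \in \mathcal{A}\;\Bigl(Fx \;\rightarrow\; \exists G \in \mathcal{B}\,\bigl(Gx \;\wedge\; \Box\,\forall y\,(Gy \rightarrow Fy)\bigr)\Bigr) \]

The formula secures systematic covariance of mental with physical properties; but, as the text goes on to argue, it is silent about why the mental depends on the physical.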

It would seem that any serious theory addressing the mind-body problem must say something illuminating about the nature of psychophysical dependence, or about why, contrary to common belief, there is no dependence. However, there is reason to think that ‘supervenient dependence’ does not signify a special type of dependence relation. This is evident when we reflect on the variety of ways in which we could explain why the supervenience relation holds in a given case. For example, consider the supervenience of the moral on the descriptive: the ethical naturalist will explain this on the basis of definability; the ethical intuitionist will say that the supervenience, and also the dependence, is a brute fact that you discern through moral intuition; and the prescriptivist will attribute the supervenience to some form of consistency requirement on the language of evaluation and prescription. And distinct from all of these is mereological supervenience, namely the supervenience of the properties of a whole on the properties and relations of its parts. What all this shows is that there is no single type of dependence relation common to all cases of supervenience: supervenience holds in different cases for different reasons, and does not represent a type of dependence that can be put alongside causal dependence, meaning dependence, mereological dependence and so forth.

If this is right, the supervenience thesis concerning the mental does not constitute an explanatory account of the mind-body relation on a par with the classic alternatives on the mind-body problem. It is merely the claim that the mental covaries in a systematic way with the physical, and that this is due to a certain dependence relation yet to be specified and explained. In this sense, the supervenience thesis states the mind-body problem rather than offering a solution to it.

There seems to be a promising strategy for turning the supervenience thesis into a more substantive theory of mind, and it is this: to explicate mind-body supervenience as a special case of mereological supervenience - that is, the dependence of the properties of a whole on the properties and relations characterizing its proper parts. Mereological dependence does seem to be a special form of dependence that is metaphysical and highly important. If one takes this approach, one would have to explain psychological properties as macro-properties of a whole organism that covary, in appropriate ways, with its micro-properties, i.e., the ways its constituents, tissues, and so on, are organized and function. This more specific supervenience thesis may well be a serious theory of the mind-body relation that can compete with the classic options in the field.

To return to an earlier point: whether or not it is plausible that thought depends on the communicative use of language (that is a separate question), it would be no more logically puzzling than the idea that one could not have any propositional attitudes unless one had attitudes with certain sorts of contents. Tyler Burge’s (1979) insight - that the contents of one’s thoughts are partly determined by the meanings of one’s words in one’s linguistic community - is perfectly consistent with an intention-based semantic reduction of the semantic to the psychological. Nevertheless, there is reason to be sceptical of the intention-based semantic programme. First, no intention-based semantic theorist has succeeded in stating a sufficient condition for speaker meaning, let alone in the more difficult task of stating a necessary-and-sufficient condition. A plausible explanation of this failure is that what typically makes an utterance an act of speaker meaning is the speaker’s intention to be meaning or saying something, where the concept of meaning or saying used in the content of the intention is irreducibly semantic. Second, there is doubt about the intention-based semantic way of accounting for the actual-language relation in terms of speaker meaning. The essence of the approach is that sentences are used as conventional devices for making known a speaker’s communicative intentions: understanding is an inferential process wherein a hearer perceives an utterance and, thanks to being party to relevant conventions or practices, infers the speaker’s communicative intentions. Yet it appears that this inferential model is subject to insuperable epistemological difficulties. Third, there is no pressing reason to think that the semantic needs to be definable in terms of the psychological. Many intention-based semantic theorists have been motivated by a strong version of physicalism which requires the reduction of all intentional properties (i.e., all semantic and propositional-attitude properties) to physical, or at least topic-neutral or functional, properties; for it is plausible that there could be no reduction of the semantic and the psychological to the physical without a prior reduction of the semantic to the psychological. But it is arguable that such a strong version of physicalism is not what is required in order to fit the intentional into the natural order.

What is more, there is a stronger claim about the dependence of thought on language: that propositional attitudes are relations to linguistic items, relations which obtain, at least partially, by virtue of the content those items have among language users. This position does not imply that believers have to be language users, but it does make language an essential ingredient in the concept of belief. The position is motivated by two considerations: (a) the supposition that believing is a relation to things believed, things which have truth values and stand in logical relations to one another, and (b) the desire not to take the things believed to be propositions - abstract, mind- and language-independent things that have their truth conditions essentially. Now tenet (a) is well motivated: the relational construal of propositional attitudes is probably the best way to account for the quantification in ‘Harvey believes something nasty about you’. But there are problems with taking linguistic items, rather than propositions, as the objects of belief. In the first place, if ‘Harvey believes that flounders snore’ is represented along the lines of a relation between Harvey and the sentence ‘flounders snore’, then one could know the truth expressed by the sentence about Harvey without knowing the content of his belief: for one could know that he stands in the belief relation to ‘flounders snore’ without knowing its content. This is unacceptable. In the second place, if Harvey believes that flounders snore, then what he believes - the reference of ‘that flounders snore’ - is that flounders snore. But what is this thing, that flounders snore? Well, it is abstract, in that it has no spatial location. It is mind- and language-independent, in that it exists in possible worlds in which there are neither thinkers nor speakers; and, necessarily, it is true if and only if flounders snore. In short, it is a proposition - an abstract, mind- and language-independent thing that has a truth condition and has essentially the truth condition it has.

A more plausible way in which thought depends on language is suggested by the topical thesis that we think in a ‘language of thought’. On one reading, this is nothing more than the vague idea that the neural states that realize our thoughts ‘have elements and structure in a way that is analogous to the way in which sentences have elements and structure’. Nonetheless, we can get a more literal rendering by relating it to the abstract conception of languages already recommended. On this conception, a language is a function from ‘expressions’ - sequences of marks or sounds or neural states or whatever - onto meanings, where the meanings will include the propositions our propositional-attitude relations relate us to. We could then read the language-of-thought hypothesis as the claim that having propositional attitudes requires standing in a certain relation to a language whose expressions are neural states. There would now be more than one ‘actual-language’ relation; the one discussed earlier might be better called the ‘public-language relation’. Since the abstract notion of a language has been so weakly construed, it is hard to see how the minimal language-of-thought proposal just sketched could fail to be true. At the same time, it has been given no interesting work to do. In trying to give it more interesting work, further dependencies of thought on language might come into play. For example, it has been claimed that the language of thought of a public-language user is the public language she uses: her neural sentences are related to her spoken and written sentences in something like the way her written sentences are related to her spoken sentences. For another example, it might be claimed that even if one’s language of thought is distinct from one’s public language, the language-of-thought relation presupposes the public-language relation, in a way that makes the contents of one’s thoughts dependent on the meanings of one’s words in one’s public-language community.
