June 23, 2010


The Representational Theory of Mind defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry. The Representational Theory of Mind also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition q from the propositions p and 'if p then q' is (among other things) to have a sequence of thoughts of the form 'p', 'if p then q', 'q'.
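The core idea can be pictured schematically. The following minimal sketch (in Python, and not any particular author's formalism) treats an intentional mental state as a pairing of an attitude-type with a mental representation, so that belief, fear and regret can all involve one and the same representation; the class and attribute names are invented for illustration.

```python
# A minimal sketch of the Representational Theory of Mind's core idea:
# an intentional mental state pairs an attitude-type with a mental
# representation, and different attitudes can share the same representation.
from dataclasses import dataclass

@dataclass(frozen=True)
class MentalRepresentation:
    content: str          # the propositional content, e.g. "Elvis is dead"

@dataclass(frozen=True)
class IntentionalState:
    attitude: str         # e.g. "belief", "desire", "fear", "regret"
    representation: MentalRepresentation

elvis_is_dead = MentalRepresentation("Elvis is dead")

# Different relations (attitudes) to the same representation:
belief = IntentionalState("belief", elvis_is_dead)
fear = IntentionalState("fear", elvis_is_dead)
regret = IntentionalState("regret", elvis_is_dead)

assert belief.representation == fear.representation == regret.representation
```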


Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized - i.e., that all mental facts have explanations in terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as ‘folk psychology’) are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)

Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the ‘intentional stance’ toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational - i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a ‘moderate’ realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic ‘structural’ or ‘syntactic’ properties. The semantic properties of a mental state, however, are determined by its extrinsic properties - e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal (‘what-it's-like’) features (‘qualia’), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states, such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)

Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that non-conceptual representations - percepts (‘impressions’), images (‘ideas’) and the like - are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only non-conceptual representations construed in this way.

Contemporary disagreement over non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.

There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2003), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991), Pitt (2004), Searle (1992), Siewert (1998) and Strawson (1994) claim that purely symbolic (conscious) representational states themselves have a (perhaps proprietary) phenomenology. If this claim is correct, the question of what role phenomenology plays in the determination of content arises again for conceptual representation; and the eliminativist ambitions of Sellars, Brandom and Rey would meet a new obstacle. (It would also raise prima facie problems for reductive representationalism.)

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term ‘representationalism’ is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal - though not in the same way.)

The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45-51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to ‘see through it’ to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of ‘symbol-filled arrays’ (cf. the account of mental images in Tye 1991).

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences - qualia themselves - that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual ‘scenario’ (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is ‘correct’ (a semantic property) if in the corresponding ‘scene’ (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the ‘phenomenal concept’ - a conceptual/phenomenal hybrid consisting of a phenomenological ‘sample’ (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, ‘you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.’ One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about P, without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Jackson 1982, 1986 and Nagel 1974.)

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.
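The chronometric finding can be illustrated with a toy linear model. The sketch below assumes, purely for illustration, a made-up intercept and slope; it is not a report of the actual parameters estimated by Kosslyn or by Shepard and Cooper, only a picture of response time increasing in proportion to a spatial property of the task (here, angular disparity).

```python
# A toy model (hypothetical numbers) of the rotation-task finding:
# response time grows roughly linearly with the spatial property varied.
def predicted_response_time_ms(angle_deg: float,
                               base_ms: float = 500.0,
                               ms_per_degree: float = 10.0) -> float:
    """Linear model: RT = base + slope * angular disparity."""
    return base_ms + ms_per_degree * angle_deg

for angle in (0, 60, 120, 180):
    print(angle, predicted_response_time_ms(angle))
```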

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties - i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation ‘pictorial’; though of course there may be imagery in other modalities - auditory, olfactory, etc. - as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes (Block 1983).) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.
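The continuous/discrete contrast can be made vivid with a small sketch. The attribute names below are illustrative only and are not drawn from any of the cited authors; the point is simply that an analog representation represents in virtue of properties that take values on a continuum, while a digital one represents in virtue of properties it either has or lacks.

```python
# A rough sketch of the Dretske-style analog/digital contrast:
# continuously variable vs. all-or-nothing representational properties.
from dataclasses import dataclass

@dataclass
class ImagisticRepresentation:
    # Analog: represents in virtue of continuously variable properties.
    brightness: float      # may take any value in [0.0, 1.0]
    vividness: float

@dataclass
class ConceptualRepresentation:
    # Digital: represents in virtue of properties it either has or lacks.
    about_elvis: bool      # a thought is or is not about Elvis; no degrees

image = ImagisticRepresentation(brightness=0.73, vividness=0.4)
thought = ConceptualRepresentation(about_elvis=True)
```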

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is ‘quasi-pictorial’ when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially - for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)

Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are ‘(labelled) interpreted symbol-filled arrays.’ The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each ‘cell’ in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).
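A data-structure sketch may help fix the idea. The following is a minimal illustration under stated assumptions, not Tye's own formalism: each cell stands for a viewer-centred 2-D location and holds discursive labels for the features represented there, while the array's geometry itself carries representational significance.

```python
# A minimal sketch of an image as a labelled, symbol-filled array:
# cells are viewer-centred 2-D locations; their contents are discursive labels.
from typing import Dict, Set, Tuple

# Keys are (x, y) viewer-centred locations; values are sets of feature labels.
SymbolFilledArray = Dict[Tuple[int, int], Set[str]]

image: SymbolFilledArray = {
    (0, 0): {"edge", "red"},
    (0, 1): {"red"},
    (1, 0): {"edge"},
    (1, 1): set(),          # empty cell: nothing represented at that location
}

# The array's geometry is itself significant: adjacency of cells represents
# adjacency of the represented surface locations.
def represented_as_adjacent(a: Tuple[int, int], b: Tuple[int, int]) -> bool:
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

assert represented_as_adjacent((0, 0), (0, 1))
```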

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.

Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990, 1994) and Teleological Theories (Fodor 1990, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.
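The asymmetric-dependency idea described above can be rendered as a toy check over causal routes into a symbol. In the sketch below the dependency relation is simply stipulated by hand for illustration (it is the philosophical theory's job, not the code's, to say where such dependencies come from); the point is only that the content-fixing cause is the one on which the other causal routes asymmetrically depend.

```python
# A toy rendering of the asymmetric-dependency story for a HORSE symbol.
causes_of_HORSE = {"horse", "cow_on_dark_night", "zebra_in_mist", "great_dane"}

# depends_on[a] = b means: the a->HORSE law holds only because the
# b->HORSE law holds (and not vice versa). Stipulated for illustration.
depends_on = {
    "cow_on_dark_night": "horse",
    "zebra_in_mist": "horse",
    "great_dane": "horse",
}

def content_fixing_causes(causes, dependencies):
    """Causes that do not merely depend on some other cause: on this story,
    the candidates for what the symbol represents."""
    return {c for c in causes if c not in dependencies}

print(content_fixing_causes(causes_of_HORSE, depends_on))  # {'horse'}
```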

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists; cf. Putnam 1975, Fodor 1981).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both ‘narrow’ content (determined by intrinsic factors) and ‘wide’ or ‘broad’ content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982) and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role (or its phenomenology).
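The 'function from context to wide content' construal can be put schematically, using the Twin-Earth scenario mentioned below. This is a sketch under stated assumptions, not a theory of how the mapping is fixed: the same intrinsic ('water') state is assigned different wide contents in different environments.

```python
# A schematic sketch of narrow content as a function from context to wide content.
def water_thought_wide_content(context: str) -> str:
    """The same intrinsic state determines different wide contents
    in different environments (mapping stipulated for illustration)."""
    wide_content_by_context = {
        "Earth": "H2O",        # on Earth, 'water' thoughts are about H2O
        "Twin Earth": "XYZ",   # on Twin Earth, about XYZ
    }
    return wide_content_by_context[context]

assert water_thought_wide_content("Earth") != water_thought_wide_content("Twin Earth")
```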

Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases they were introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to CTM, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. CTM, like the Representational Theory of Mind more generally, attempts to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some - so-called ‘subpersonal’ or ‘sub-doxastic’ representations - are not. Though many philosophers believe that CTM can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of the prescientific Representational Theory of Mind.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the ‘mental models’ of Johnson-Laird 1983, the ‘retinal arrays,’ ‘primal sketches’ and ‘2½-D sketches’ of Marr 1982, the ‘frames’ of Minsky 1974, the ‘sub-symbolic’ structures of Smolensky 1989, the ‘quasi-pictures’ of Kosslyn 1980, and the ‘interpreted symbol-filled arrays’ of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).

A fundamental disagreement among proponents of CTM concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.

The classicists (e.g., Turing 1950, Fodor 1975, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors (‘nodes’) and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism - ‘localist’ versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the Connectionist program (Smolensky 1988, 1991, Chalmers 1993).

Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
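The combinatorial picture just described can be illustrated with a toy symbolic 'language'. The sketch below is invented purely for illustration (the symbols, rules and semantics are not Fodor's): a finite stock of primitives plus recursive formation rules yields an unbounded set of complex representations, and the content of a complex is computed from the contents of its parts and their arrangement.

```python
# A minimal sketch of recursive formation rules and compositional content.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Primitive:
    name: str                       # e.g. "ELVIS", "DEAD"

@dataclass(frozen=True)
class Predication:
    predicate: Primitive
    subject: "Expression"

@dataclass(frozen=True)
class Negation:
    operand: "Expression"

Expression = Union[Primitive, Predication, Negation]

def content(e: Expression) -> str:
    """Compositional semantics: the content of a complex expression is a
    function of the contents of its parts and their structure."""
    if isinstance(e, Primitive):
        return e.name.lower()
    if isinstance(e, Predication):
        return f"{content(e.subject)} is {content(e.predicate)}"
    if isinstance(e, Negation):
        return f"it is not the case that {content(e.operand)}"

elvis, dead = Primitive("ELVIS"), Primitive("DEAD")
print(content(Negation(Predication(dead, elvis))))
# -> "it is not the case that elvis is dead"
```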

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic; and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of evolving distribution of ‘weight’ (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is ‘trained up’ by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
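The weight-adjustment style of learning described above can be shown in a very small sketch. This is a generic delta-rule unit, not any cited author's model, and the inputs, targets and learning rate are made up for illustration: no hypotheses are formulated; the unit is simply 'trained up' by repeated exposure until its weighted responses discriminate the two input patterns.

```python
# A tiny sketch of learning as weight adjustment over repeated exposures.
inputs_and_targets = [
    ([1.0, 0.0], 1.0),   # exposure to one kind of input  -> respond 1
    ([0.0, 1.0], 0.0),   # exposure to the other kind     -> respond 0
]
weights = [0.0, 0.0]
learning_rate = 0.1

def activation(x):
    return sum(w * xi for w, xi in zip(weights, x))

for _ in range(100):                      # many repeated exposures
    for x, target in inputs_and_targets:
        error = target - activation(x)
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]

print(weights, [round(activation(x), 2) for x, _ in inputs_and_targets])
```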

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively ‘brittle’ or ‘fragile.’

Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well.)

Stich (1983) accepts that mental processes are computational, but denies that computations are sequences of mental representations; others accept the notion of mental representation, but deny that CTM provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.
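As a rough illustration of the contrast Van Gelder draws, the following toy sketch evolves two coupled, continuously varying state variables rather than stepping through discrete symbolic states. The equations and parameters are invented purely for illustration and are not Van Gelder's model; the point is only that each variable's rate of change depends on the total state of the system.

```python
# A minimal sketch of a coupled dynamical system evolving in continuous time
# (approximated here by small Euler steps), rather than a rule-governed
# sequence of discrete symbolic states.
def step(agent: float, environment: float, dt: float = 0.01):
    # Each variable's rate of change depends on both variables at once.
    d_agent = -0.5 * agent + 0.8 * environment
    d_environment = -0.3 * environment + 0.2 * agent
    return agent + dt * d_agent, environment + dt * d_environment

agent, environment = 1.0, 0.0
for _ in range(1000):                      # evolve the total state in time
    agent, environment = step(agent, environment)

print(round(agent, 3), round(environment, 3))
```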

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. CTM attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So CTM involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to the Representational Theory of Mind, such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.

Søren Aabye Kierkegaard (1813-1855) was a Danish religious philosopher whose concern with individual existence, choice, and commitment profoundly influenced modern theology and philosophy, especially existentialism.

Søren Kierkegaard wrote of the paradoxes of Christianity and the faith required to reconcile them. In his book Fear and Trembling, Kierkegaard discusses Genesis 22, in which God commands Abraham to kill his only son, Isaac. Although God made an unreasonable and immoral demand, Abraham obeyed without trying to understand or justify it. Kierkegaard regards this ‘leap of faith’ as the essence of Christianity.

Kierkegaard was born in Copenhagen on May 15, 1813. His father was a wealthy merchant and strict Lutheran, whose gloomy, guilt-ridden piety and vivid imagination strongly influenced Kierkegaard. Kierkegaard studied theology and philosophy at the University of Copenhagen, where he encountered Hegelian philosophy and reacted strongly against it. While at the university, he ceased to practice Lutheranism and for a time led an extravagant social life, becoming a familiar figure in the theatrical and café society of Copenhagen. After his father's death in 1838, however, he decided to resume his theological studies. In 1840 he became engaged to the 17-year-old Regine Olsen, but almost immediately he began to suspect that marriage was incompatible with his own brooding, complicated nature and his growing sense of a philosophical vocation. He abruptly broke off the engagement in 1841, but the episode took on great significance for him, and he repeatedly alluded to it in his books. At the same time, he realized that he did not want to become a Lutheran pastor. An inheritance from his father allowed him to devote himself entirely to writing, and in the remaining 14 years of his life he produced more than 20 books.

Kierkegaard's work is deliberately unsystematic and consists of essays, aphorisms, parables, fictional letters and diaries, and other literary forms. Many of his works were originally published under pseudonyms. He applied the term existential to his philosophy because he regarded philosophy as the expression of an intensely examined individual life, not as the construction of a monolithic system in the manner of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, whose work he attacked in Concluding Unscientific Postscript (1846; trans. 1941). Hegel claimed to have achieved a complete rational understanding of human life and history; Kierkegaard, on the other hand, stressed the ambiguity and paradoxical nature of the human situation. The fundamental problems of life, he contended, defy rational, objective explanation; the highest truth is subjective.

Kierkegaard maintained that systematic philosophy not only imposes a false perspective on human existence but that it also, by explaining life in terms of logical necessity, becomes a means of avoiding choice and responsibility. Individuals, he believed, create their own natures through their choices, which must be made in the absence of universal, objective standards. The validity of a choice can only be determined subjectively.

In his first major work, Either/Or (2 volumes, 1843; trans. 1944), Kierkegaard described two spheres, or stages of existence, that the individual may choose: the aesthetic and the ethical. The aesthetic way of life is a refined hedonism, consisting of a search for pleasure and a cultivation of mood. The aesthetic individual constantly seeks variety and novelty in an effort to stave off boredom but eventually must confront boredom and despair. The ethical way of life involves an intense, passionate commitment to duty, to unconditional social and religious obligations. In his later works, such as Stages on Life's Way (1845; trans. 1940), Kierkegaard discerned in this submission to duty a loss of individual responsibility, and he proposed a third stage, the religious, in which one submits to the will of God but in doing so finds authentic freedom. In Fear and Trembling (1843; trans. 1941) Kierkegaard focused on God's command that Abraham sacrifice his son Isaac (Genesis 22: 1-19), an act that violates Abraham's ethical convictions. Abraham proves his faith by resolutely setting out to obey God's command, even though he cannot understand it. This ‘suspension of the ethical,’ as Kierkegaard called it, allows Abraham to achieve an authentic commitment to God. To avoid ultimate despair, the individual must make a similar ‘leap of faith’ into a religious life, which is inherently paradoxical, mysterious, and full of risk. One is called to it by the feeling of dread (The Concept of Dread, 1844; trans. 1944), which is ultimately a fear of nothingness.

Toward the end of his life Kierkegaard was involved in bitter controversies, especially with the established Danish Lutheran church, which he regarded as worldly and corrupt. His later works, such as The Sickness Unto Death (1849; trans. 1941), reflect an increasingly somber view of Christianity, emphasizing suffering as the essence of authentic faith. He also intensified his attack on modern European society, which he denounced in The Present Age (1846; trans. 1940) for its lack of passion and for its quantitative values. The stress of his prolific writing and of the controversies in which he engaged gradually undermined his health; in October 1855 he fainted in the street, and he died in Copenhagen on November 11, 1855.

Kierkegaard's influence was at first confined to Scandinavia and to German-speaking Europe, where his work had a strong impact on Protestant theology and on such writers as the 20th-century Austrian novelist Franz Kafka. As existentialism emerged as a general European movement after World War I, Kierkegaard's work was widely translated, and he was recognized as one of the seminal figures of modern culture.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality, and knew little or nothing about the physical substrates of human consciousness, the business of examining the dynamics and structure of mind became the province of social scientists and humanists. Adolphe Quételet proposed a ‘social physics’ that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of the social reality was quite inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Sanders Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each remained obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.

The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of the Cartesian dualism with emotional content was the ‘death of God’ theologian Friedrich Nietzsche (1844-1900). After declaring that God and ‘divine will’ did not exist, Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’, and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The dilemma, as Nietzsche saw it, was that the ‘will to truth’, including the will to truth of science, disguises the fact that all alleged truths are arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will’.

In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he concluded that we are all locked in ‘a prison house of language’. The prison, as he conceived it, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on ‘will’.

Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, concerns itself exclusively with natural phenomena and favors reductionistic examination of phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche’s emotionally charged defense of intellectual freedom and radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism: Jacques Lacan, Roland Barthes, Michel Foucault and the deconstructionist Jacques Derrida. The attribution of a direct linkage between the nineteenth-century crisis about the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us to better understand the origins of this cultural conflict and the ways in which it might be resolved.

The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, ‘relativistic’ notions.

Jean-Paul Sartre (1905-1980), was a French philosopher, dramatist, novelist, and political journalist, who was a leading exponent of existentialism. Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that ‘man is condemned to be free,’ Sartre reminds us of the responsibility that accompanies human decisions.

Sartre was born in Paris, June 21, 1905, and educated at the École Normale Supérieure in Paris, the University of Fribourg in Switzerland, and the French Institute in Berlin. He taught philosophy at various lycées from 1929 until the outbreak of World War II, when he was called into military service. In 1940-41 he was imprisoned by the Germans; after his release, he taught in Neuilly, France, and later in Paris, and was active in the French Resistance. The German authorities, unaware of his underground activities, permitted the production of his antiauthoritarian play The Flies (1943; trans. 1946) and the publication of his major philosophic work Being and Nothingness (1943; trans. 1953). Sartre gave up teaching in 1945 and founded the political and literary magazine Les Temps Modernes, of which he became editor in chief. Sartre was active after 1947 as an independent Socialist, critical of both the USSR and the United States in the so-called cold war years. Later, he supported Soviet positions but still frequently criticized Soviet policies. Most of his writing of the 1950s deals with literary and political problems. Sartre rejected the 1964 Nobel Prize in literature, explaining that to accept such an award would compromise his integrity as a writer.

Sartre's philosophic works combine the phenomenology of the German philosopher Edmund Husserl, the metaphysics of the German philosophers Georg Wilhelm Friedrich Hegel and Martin Heidegger, and the social theory of Karl Marx into a single view called existentialism. This view, which relates philosophical theory to life, literature, psychology, and political action, stimulated so much popular interest that existentialism became a worldwide movement.

In his early philosophic work, Being and Nothingness, Sartre conceived humans as beings who create their own world by rebelling against authority and by accepting personal responsibility for their actions, unaided by society, traditional morality, or religious faith. Distinguishing between human existence and the nonhuman world, he maintained that human existence is characterized by nothingness, that is, by the capacity to negate and rebel. His theory of existential psychoanalysis asserted the inescapable responsibility of all individuals for their own decisions and made the recognition of one's absolute freedom of choice the necessary condition for authentic human existence. His plays and novels express the belief that freedom and acceptance of personal responsibility are the main values in life and that individuals must rely on their creative powers rather than on social or religious authority.

In his later philosophic work Critique of Dialectical Reason (1960; trans. 1976), Sartre's emphasis shifted from existentialist freedom and subjectivity to Marxist social determinism. Sartre argued that the influence of modern society over the individual is so great as to produce serialization, by which he meant loss of self. Individual power and freedom can only be regained through group revolutionary action. Despite this exhortation to revolutionary political activity, Sartre himself did not join the Communist Party, thus retaining the freedom to criticize the Soviet invasions of Hungary in 1956 and Czechoslovakia in 1968. He died in Paris, April 15, 1980.

Pragmatics is the part of the theory of signs, or semiotics, that concerns the relationship between speakers and their signs; the study of the principles governing appropriate conversational moves is called general pragmatics, while applied pragmatics treats special kinds of linguistic interaction such as interviews and speech making. Pragmatism, by contrast, is the philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the work of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During the same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept ‘brittle,’ for example, is given by the observed consequences or properties that objects called ‘brittle’ exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, likewise believed that our evolving species was fated to get ever closer to Truth; they emphasized the importance of scientific verification, rejecting the assertion of earlier positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called ‘the will to believe’ and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by the American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has also renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty’s interpretation of the tradition.

In an ever changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.

Semantics (from the Greek semantikos, ‘significant’) is one of the branches into which semiotics is usually divided: the study of the meaning of words and of their relation to the objects they designate. A semantics is provided for a formal language when an interpretation or model is specified. More generally, semantics is the study of the meaning of linguistic signs - that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as ‘What is the meaning of (the word) X?’ They do this by studying what signs are, as well as how signs possess significance - that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs - what they stand for - with the process of assigning those meanings.

Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, plus an approach known as general semantics. Philosophers look at the behavior that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.

These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as ‘the victor at Jena’ and ‘the loser at Waterloo,’ both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.

In the late 19th century Michel Jules Alfred Breal, a French philologist, proposed a ‘science of significations’ that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism.

One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analyzing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).

An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to indicate a definite pen - plume - of the color red - rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.

An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition - a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign ‘the moon is a sphere’ is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing to which it refers - the moon - is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
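
To make the idea of a truth condition concrete, here is a minimal formalization in modern model-theoretic notation; the double-bracket notation and the model M are standard devices of my own choosing, not something introduced in the text above:

    % Truth condition for the example sentence, relative to an interpretation M:
    [[\text{the moon is a sphere}]]^{M} = \text{True}
    \iff \text{the object denoted by `the moon' in } M
         \text{ belongs to the extension of `is a sphere' in } M.

Read this as saying that the sentence is true in an interpretation exactly when the thing the subject term picks out satisfies the predicate, which is just the truth condition described informally above.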

The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs - by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favor of his ‘ordinary language’ philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.

From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in ‘the moon is a sphere’); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs - that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.

What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic - that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone - independent of a speaker and hearer.

Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analyzing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs), often nominal arguments (noun phrases), or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression ‘Bill gives Mary the book,’ ‘gives’ is an operator that relates the arguments ‘Bill,’ ‘Mary,’ and ‘the book.’
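
As a rough sketch of the operator-argument analysis just described (the representation below is a hypothetical illustration of mine, not a notation used in the text), the example sentence can be recorded as an operator applied to its nominal arguments:

    # Hypothetical sketch: 'gives' as an operator relating three nominal arguments.
    proposition = {
        "operator": "gives",
        "arguments": ("Bill", "Mary", "the book"),  # actor, recipient, thing given
    }

    # The analysis does not depend on a noun/verb distinction: it only records
    # which sign functions as the operator and which signs fill its argument places.
    print(proposition["operator"], "relates", ", ".join(proposition["arguments"]))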

Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, making distinctions among entities, relations, or actions. For example, ‘kiss’ belongs to an expression class with other items such as ‘hit’ and ‘see,’ as well as to the conventional part of speech ‘verb,’ in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In ‘Mary kissed John,’ the syntactic role of ‘kiss’ is to relate two nominal arguments (‘Mary’ and ‘John’), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, ‘John’ has the same semantic role - to identify a person - in the following two sentences: ‘John is easy to please’ and ‘John is eager to please.’ The syntactic role of ‘John’ in the two sentences, however, is different: In the first, ‘John’ is the receiver of an action; in the second, ‘John’ is the actor.

Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs - usually single words as vocabulary items called lexemes - in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain ‘seat’ in English, the lexemes ‘chair,’ ‘sofa,’ ‘loveseat,’ and ‘bench’ can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning ‘something on which to sit.’
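
The kind of feature decomposition described above can be sketched as a small data structure; the particular features and values below are my own illustrative choices, not an analysis drawn from the text:

    # Toy componential analysis of the English semantic domain 'seat'.
    # Each lexeme is a bundle of semantic features; the feature shared by all
    # lexemes characterizes the domain, the others differentiate its members.
    seat_domain = {
        "chair":    {"for_sitting": True, "seats": 1, "has_back": True},
        "loveseat": {"for_sitting": True, "seats": 2, "has_back": True},
        "sofa":     {"for_sitting": True, "seats": 3, "has_back": True},
        "bench":    {"for_sitting": True, "seats": 3, "has_back": False},
    }

    # Feature-value pairs shared by every lexeme in the domain:
    shared = set.intersection(*(set(f.items()) for f in seat_domain.values()))
    print(shared)  # {('for_sitting', True)} - 'something on which to sit'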

Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.

Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as ‘Colorless green ideas sleep furiously’), although grammatical expressions, are meaningless - semantically blocked. The rules must also account for how a sentence such as ‘They passed the port at midnight’ can have at least two interpretations.

Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence ‘Colorless green ideas sleep furiously’ has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as ‘They passed the port at midnight’), one decides which meaning applies.

In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech - that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.

Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.

Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure); or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible - in a syntactically based theory - for surface structure and deep structure jointly to determine the semantic interpretation of an expression.

The focus of general semantics is how people evaluate words and how that evaluation influences their behavior. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigor, and the approach has declined in popularity.

Positivism is a system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge. The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte (1798-1857), but some of the positivist concepts may be traced to the British philosopher David Hume, the French philosopher Henri de Saint-Simon, and the German philosopher Immanuel Kant.

Comte chose the word positivism on the ground that it indicated the ‘reality’ and ‘constructive tendency’ that he claimed for the theoretical aspect of the doctrine. He was, in the main, interested in a reorganization of social life for the good of humanity through scientific knowledge, and thus control of natural forces. The two primary components of positivism, the philosophy and the polity (or program of individual and social conduct), were later welded by Comte into a whole under the conception of a religion, in which humanity was the object of worship. A number of Comte's disciples refused, however, to accept this religious development of his philosophy, because it seemed to contradict the original positivist philosophy. Many of Comte's doctrines were later adapted and developed by the British social philosophers John Stuart Mill and Herbert Spencer and by the Austrian philosopher and physicist Ernst Mach.

By comparison, the moral philosopher and epistemologist Bernard Bolzano (1781-1848) argues that there is something else: an infinity that does not have this ‘whatever you need it to be’ elasticity. A truly infinite quantity (for example, the length of a straight line unbounded in either direction, meaning the magnitude of the spatial entity containing all the points determined solely by their abstractly conceivable relation to two fixed points) does not by any means need to be variable, and in the example just adduced it is in fact not variable. Conversely, it is quite possible for a quantity merely capable of being taken greater than we have already taken it, and of becoming larger than any pre-assigned (finite) quantity, nevertheless to remain at all times merely finite - which holds in particular of every numerical quantity 1, 2, 3, 4, 5, . . .

In other words, for Bolzano there could be a true infinity that was not merely a variable ‘something’ that was only bigger than anything you might specify. Such a true infinity was the result of joining two points and extending that line in both directions without stopping. And what is more, he could separate off the demands of the calculus, using finite quantities without ever bothering with the slippery potential infinity. Here was both a deeper understanding of the nature of infinity and the basis on which his safe, infinity-free calculus was built.

This use of the inexhaustible follows on directly from Bolzano's criticism of the way that ∞ was used as a variable ‘something’ that would be bigger than anything you could specify, but never quite reached the true, absolute infinity. In Paradoxes of the Infinite, Bolzano points out that it is possible for a quantity merely capable of becoming larger than any pre-assigned (finite) quantity nevertheless to remain at all times merely finite.
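
The contrast Bolzano is drawing can be stated compactly in modern notation; the symbols below are my own gloss, not Bolzano's notation:

    % Potential infinity: a variable finite quantity that can always be exceeded,
    % yet is finite at every stage.
    \forall n \in \mathbb{N} \;\; \exists m \in \mathbb{N} \;\; (m > n)

    % Actual infinity: a completed, non-variable totality, e.g. the set of all
    % natural numbers, or the magnitude of a line unbounded in both directions.
    \mathbb{N} = \{1, 2, 3, \dots\}, \qquad |\mathbb{N}| = \aleph_0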

Bolzano intended this as a criticism of the way infinity was treated, but Professor Jacquette sees it instead as a way of making use of practical applications like the calculus without the need for weasel words about infinity.

By replacing ∞ with ¤ we do away with one of the most common requirements for infinity, but is there anything left that maps onto the real world? Can we confine infinity to that pure mathematical other world, where anything, however unreal, can be constructed, and forget about it elsewhere? Surprisingly, this seems to have been the view, at least at one point in time, even of the German mathematician and founder of set theory Georg Cantor (1845-1918) himself, who commented in 1883 that only the finite numbers are real.

Both the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30) and the Italian mathematician Giuseppe Peano (1858-1932) sought to distinguish the logical paradoxes from those that depend upon notions of reference or truth (semantic notions). Among the logical principles at issue are the postulates justifying mathematical induction. Induction ensures that a numerical series is closed, in the sense that nothing but zero and its successors can be numbers; any series satisfying such a set of axioms can be conceived as the sequence of natural numbers. Candidates from set theory include the Zermelo numbers, where the empty set is zero and the successor of each number is its unit set, and the von Neumann numbers, where each number is the set of all smaller numbers. A similar and equally fundamental complementarity exists in the relation between zero and infinity. Although the fullness of infinity is logically antithetical to the emptiness of zero, infinity can be reached from zero by a simple mathematical operation: division of any non-zero number by zero is conventionally taken to give infinity, while multiplication of any number by zero gives zero.
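
The two set-theoretic candidates just mentioned are easy to exhibit. The following is a small sketch of my own in Python, using frozensets to stand in for sets, and is meant only to make the two definitions vivid:

    # Zermelo numerals: 0 is the empty set, and the successor of x is {x}.
    def zermelo(n):
        num = frozenset()
        for _ in range(n):
            num = frozenset([num])        # successor = unit set
        return num

    # Von Neumann numerals: each number is the set of all smaller numbers,
    # so the successor of x is x ∪ {x}.
    def von_neumann(n):
        num = frozenset()
        for _ in range(n):
            num = num | frozenset([num])  # successor = x ∪ {x}
        return num

    # The encodings agree on zero but diverge afterwards:
    # zermelo(2)     == {{{}}}         (exactly one member at every stage)
    # von_neumann(2) == {{}, {{}}}     (n members: 0, 1, ..., n-1)
    print(len(zermelo(3)), len(von_neumann(3)))  # prints: 1 3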

These developments grew out of the set theory created by the German mathematician and logician Georg Cantor. Beginning in the 1870s, Cantor developed a theory of abstract sets of entities that eventually became a mathematical discipline in its own right. A set, as he defined it, is a collection of definite and distinguishable objects of thought or perception conceived as a whole.

Cantor attempted to prove that the process of counting and the definition of the integers could be placed on a solid mathematical foundation. His method was to place the elements of one set into one-to-one correspondence with those of another. In the case of the integers, Cantor showed that each integer (1, 2, 3, . . . n) could be paired with an even integer (2, 4, 6, . . . 2n), and therefore that the set of all integers is the same size as the set of all even numbers.
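
Written out, the pairing is the familiar doubling map; this is the standard modern formulation rather than Cantor's own notation:

    f : \{1, 2, 3, \dots\} \to \{2, 4, 6, \dots\}, \qquad f(n) = 2n

    % f is one-to-one (2m = 2n implies m = n) and onto (every even number 2n is
    % the image of n), so the two sets have the same cardinality even though the
    % even numbers form a proper subset of the integers.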

Amazingly, Cantor discovered that some infinite sets are larger than others and that infinite sets form a hierarchy of ever greater infinities. After this failed attempt to save the classical view of the logical foundations and internal consistency of mathematical systems, it soon became obvious that a major crack had appeared in the seemingly solid foundations of number and mathematics. Meanwhile, an impressive number of mathematicians began to see that everything from functional analysis to the theory of real numbers depended on the problematic character of number itself.

In the theory of probability, Ramsey was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a redundancy theory of truth, which he combined with radical views of the function of many kinds of propositions. Neither generalizations nor causal propositions, nor those treating probability or ethics, describe facts; rather, each has a different, specific function in our intellectual economy.

Ramsey also advocated what is now called a Ramsey sentence: a sentence generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark,’ replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of the theory's theoretical terms, the sentence gives the topic-neutral structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Nonetheless, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of the theory, then, by the Löwenheim-Skolem theorem, the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
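
A toy illustration of the procedure may help; the miniature theory below is my own example, chosen only for brevity, and is not meant to reflect actual physics:

    % A miniature 'theory' whose only theoretical term is 'Quark':
    T: \quad \forall x\,(\mathrm{Quark}(x) \rightarrow \mathrm{Charged}(x)) \;\wedge\; \exists x\,\mathrm{Quark}(x)

    % Replace 'Quark' by a predicate variable X and existentially quantify:
    R(T): \quad \exists X\,[\,\forall x\,(X(x) \rightarrow \mathrm{Charged}(x)) \;\wedge\; \exists x\,X(x)\,]

    % R(T) preserves the structural claims of T while dropping any implication
    % that we know what 'Quark' denotes.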

Perhaps the best known of the paradoxes in the foundations of set theory is the one discovered by Russell in 1901. Some classes have themselves as members: the class of all abstract objects, for example, is itself an abstract object; others do not: the class of donkeys is not itself a donkey. Now consider the class of all classes that are not members of themselves. Is this class a member of itself? If it is, then it is not; and if it is not, then it is.
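
In symbols the argument is short; this is the standard formalization of the paradox:

    R = \{\, x : x \notin x \,\}
    \quad\Longrightarrow\quad
    R \in R \;\leftrightarrow\; R \notin R

    % Both horns are contradictory, so no class satisfies the defining condition
    % under unrestricted comprehension.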

The paradox is structurally similar to easier examples, such as the paradox of the barber. Imagine a village with a barber in it who shaves all and only the people who do not shave themselves. Who shaves the barber? If he shaves himself, then he does not; but if he does not shave himself, then he does. The paradox is actually just a proof that there is no such barber, or, in other words, that the condition is inconsistent. All the same, it is not so easy to say why there is no such class as the one Russell defines. It seems that there must be some restriction on the kinds of definition that are allowed to define classes, and the difficulty is that of finding a well-motivated principle behind any such restriction.

The French mathematician and philosopher Henri Jules Poincaré (1854-1912) believed that paradoxes like those of Russell and the barber were due to impredicative definitions, and he therefore proposed banning them. But it turns out that classical mathematics requires such definitions at too many points for the ban to be easily accepted. The proposal put forward by Poincaré and Russell, the vicious-circle principle, was that in order to solve the logical and semantic paradoxes one would have to ban any collection (set) containing members that can only be defined by means of the collection taken as a whole. The principle, in effect, marks off definitions that involve a vicious regress from those that involve no such failure. There is frequently room for dispute about whether a regress is benign or vicious, since the issue will hinge on whether it is necessary to reapply the procedure. (The cosmological argument, for instance, is an attempt to find a stopping point for what is otherwise seen as an infinite regress.)
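
A textbook example of the kind of definition Poincaré objected to (a standard illustration of my own, not one given in the passage above) is the least upper bound of a bounded set of real numbers:

    \sup S \;=\; \min \{\, b \in \mathbb{R} : \forall s \in S,\; s \le b \,\}

    % The definition quantifies over all upper bounds of S, a totality that
    % includes sup S itself - an impredicative definition. Classical analysis
    % leans on such definitions at many points, which is why the proposed ban
    % proved so hard to sustain.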

The philosophy of science is the investigation of questions that arise from reflection upon the sciences and scientific inquiry. Such questions include: What is distinctive about the methods of science? Is there a clear demarcation between science and other disciplines, and how do we place such enquiries as history, economics or sociology? Are scientific theories probable, or more in the nature of provisional conjectures? Can they be verified or falsified? What distinguishes good from bad explanations? Might there be one unified science, embracing all the special sciences? For much of the 20th century these questions were pursued in a highly abstract and logical framework, it being supposed that a general logic of scientific discovery or justification might be found. However, many now take an interest in a more historical, contextual and sometimes sociological approach, in which the methods and successes of a science at a particular time are regarded less in terms of universal logical principles and procedures, and more in terms of the then-available methods and paradigms as well as the social context.

In addition to general questions of methodology, there are specific problems within particular sciences, giving rise to the philosophies of such subjects as biology, mathematics and physics.

Intuition is the immediate awareness either of the truth of some proposition or of an object of apprehension, such as a concept. It occupies an important place in philosophical accounts of the sources of our knowledge, covering both the sensible apprehension of things and, in Kant, the pure intuition that structures sensation into the experience of things ordered in space and time.

Natural law is the view of the status of law and morality especially associated with St Thomas Aquinas and the subsequent scholastic tradition. More widely, it is any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writers, is arguably derivative from a Platonic view of ethics, and is implicit in ancient Stoicism. Natural law stands above and apart from the activities of human lawmakers; it constitutes an objective set of principles that can be seen to be true by natural light or reason and (in religious versions of the theory) that express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between natural law and God's will. For instance, the Dutch philosopher Hugo Grotius (1583-1645) takes the view that the content of natural law is independent of any will, including that of God, while the German theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view, thereby facing one horn of the Euthyphro dilemma: whatever the source of authority is supposed to be, do we care about the general good because it is good, or do we just call good the things that we care about? The theory may take a strong form, in which it is claimed that various facts entail values, or a weaker form, which confines itself to holding that reason by itself is capable of discerning moral requirements that are binding on all human beings regardless of their desires.

Although the morality of a people and its ethics often amount to the same thing, there is a usage that restricts morality to systems such as that of the German philosopher Immanuel Kant (1724-1804), founder of critical philosophy, based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning based on the notion of a virtue, and generally avoiding the separation of moral considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests. Some theorists see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable deductively. Other approaches to ethics (e.g., eudaimonism, situation ethics, virtue ethics) eschew general principles as much as possible, frequently disguising the great complexity of practical reasoning. For Kant the moral law is a binding requirement of the categorical imperative, and the question is whether these approaches are equivalent at some deep level. Kant's own applications of the notion are not always convincing. One cause of confusion in relating Kant's ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of sentiment, but must derive from something unconditional or necessary such as the voice of reason.

Duty is that which one must do, or that which can be required of one. The term carries implications of what is owed (due) to other people, or perhaps to oneself. Universal duties would be owed to persons (or sentient beings) as such, whereas special duties arise in virtue of specific relations, such as being the child of someone or having made someone a promise. Duty or obligation is the primary concept of deontological approaches to ethics, but is constructed in other systems out of other notions. In the system of Kant, a perfect duty is one that must be performed whatever the circumstances; imperfect duties may have to give way to the more stringent ones. On another account, perfect duties are those that are correlative with rights in others, while imperfect duties are not. Problems with the concept include the way in which duties need to be specified (a frequent criticism of Kant is that his notion of duty is too abstract). The concept may also suggest a regimented view of ethical life in which we are all forced conscripts in a kind of moral army, and may encourage an individualistic and antagonistic view of social relations.

The most generally accepted account of the internalism/externalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist, if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

The externalist/internalist distinction has been mainly applied to theories of epistemic justification: It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought contents.

The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focussing his attention appropriately, without the need for any change of position, new information, etc. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would require the strong interpretation.

Perhaps the clearest example of an internalist position would be a Foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which the belief being justified is required to cohere and the coherence relations themselves are reflectively accessible.

It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all the justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of reliabilism, whose requirement for justification is, roughly, that the belief be produced in a way or via a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

The main objection to externalism rests on the intuitive certainty that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions by citing beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very nonstandard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

Perhaps the most striking reply to this sort of counter-example, on behalf of reliabilism, is that the reliability of a cognitive process is to be assessed in ‘normal’ possible worlds, i.e., in possible worlds that are actually the way our world is commonsensically believed to be, rather than in the world which contains the belief being judged. Since the cognitive processes employed in the Cartesian demon cases are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious issue this raises is whether there is an adequate rationale for this construal of reliabilism, or whether the reply is merely an ad hoc stipulation.

The second, correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities such as clairvoyance. Applying the point once again to reliabilism, the claim is that a believer who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to bite the bullet and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains questionable whether there are not further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to reject.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, the fact that beliefs for which such a factor is available are objectively likely to be true need not itself be grasped by, or even cognitively accessible to, the believer. In effect, of the premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is still not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms internalism and externalism has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors, and a view on which content depends on both internal and external elements is standardly classified as externalist.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, etc. that motivate the views that have come to be known as direct reference theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts from the inside, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be representational (to have content) and what it is for something to have some particular content rather than some other. There appear to be only four types of theory that have been proposed: theories that ground representation in (1) similarity, (2) covariance, (3) functional role, and (4) teleology.

Similarity theories hold that 'r' represents 'x' in virtue of being similar to 'x'. This has seemed hopeless to most as a theory of mental representation because it appears to require that things in the brain must share properties with the things they represent: to represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalistic and does not involve property sharing can be worked out, but it is not obvious how.

Finally, while the formalism of quantum physics predicts that correlations between particles over space-like separations are possible, it can say nothing about what this strange new relationship between parts (quanta) and the whole (cosmos) amounts to outside that formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be “mutually adaptive and complementary to one another.”

Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts constituting the whole, even though the whole is exemplified only in its parts. This principle of order, Harris continued, “is nothing really in and of itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.”

In a genuine whole, the relationship between the constituent parts must be “internal or immanent” in the parts, as opposed to a more spurious whole in which the parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that would allegedly constitute the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics; it is also consistent with the manner in which we have begun to understand the relations between parts and whole in modern biology.

Modern physics also reveals, claimed Harris, a complementary relationship between the differences between the parts that constitute a whole and the universal ordering principle that is immanent in each part. While the whole cannot be finally disclosed in the analysis of the parts, the study of the differences between parts provides insights into the dynamic structure of the whole present in each part. The part can never, however, be finally isolated from the web of relationships that disclose its interconnections with the whole, and any attempt to do so results in ambiguity.

Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet the order in complementary relationships between difference and sameness in any physical event is never external to that event; it is immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic constitution of wholeness is not surprising. The relationship between the part, as a quantum event apparent in observation or measurement, and the undissectable whole, disclosed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity of modern physics.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, in which lawfully emergent properties of systems appear, we can assume that the cosmos is a single significant whole that evinces progressive order in complementary relations to its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground of all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain (well protected within the cranium) and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is not unreasonable to conclude, in philosophical terms at least, that the universe is conscious.

Nevertheless, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representation or description. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.

While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on that knowledge, let us be quite clear on one point - there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss the speculation are obviously free to do so. However, there is another conclusion to be drawn here that is firmly grounded in scientific theory and experiment: there is no basis in the scientific description of nature for believing in the radical Cartesian division between mind and world sanctioned by classical physics. Clearly, this radical separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realm of their applicability.

Nevertheless, it is worth considering in philosophical terms how our proposed new understanding of the relationship between parts and wholes in physical reality might affect the manner in which we deal with some major real-world problems. This will serve to demonstrate why a timely resolution of these problems is critically dependent on a renewed dialogue between members of the cultures of humanists-social scientists and scientists-engineers. We will also argue that the resolution of these problems could be dependent on a renewed dialogue between science and religion.

As many scholars have demonstrated, the classical paradigm in physics has greatly influenced and conditioned our understanding and management of human systems in economic and political realities. Virtually all models of these realities treat human systems as if they consist of atomized units or parts that interact with one another in terms of laws or forces external to or between the parts. These systems are also viewed as hermetic or closed and, thus, as discrete, separate and distinct.

Consider, for example, how the classical paradigm influenced our thinking about economic reality. In the eighteenth and nineteenth centuries, the founders of classical economics - figures like Adam Smith, David Ricardo, and Thomas Malthus - conceived of the economy as a closed system in which interactions between parts (consumers, producers, distributors, etc.) are controlled by forces external to the parts (supply and demand). The central legitimating principle of free market economics, formulated by Adam Smith, is that lawful or law-like forces external to the individual units function as an invisible hand. This invisible hand, said Smith, frees the units to pursue their best interests, moves the economy forward, and in general legislates the behaviour of parts in the best interests of the whole. (The resemblance between the invisible hand and Newton’s universal law of gravity, and between the relations of parts and wholes in classical economics and classical physics, should be obvious.)

After roughly 1830, economists shifted their focus to the operation of the invisible hand in the interactions between parts, using mathematical models. Within these models, the behaviour of parts in the economy is assumed to be analogous to the lawful interactions between parts in classical mechanics. It is, therefore, not surprising that differential calculus was employed to represent economic change in a virtual world in terms of small or marginal shifts in consumption or production. The assumption was that the mathematical description of marginal shifts in the complex web of exchanges between parts (atomized units and quantities) and whole (closed economy) could reveal the lawful, or law-like, machinations of the closed economic system.
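
A purely illustrative sketch of this marginalist use of the calculus, with a hypothetical cost function chosen only for the sake of the example: if total cost is
\[
C(q) = 100 + 4q + \tfrac{1}{2}q^{2}, \qquad MC(q) = \frac{dC}{dq} = 4 + q,
\]
then the marginal cost at ten units of output is \(MC(10) = 14\), approximating the cost of the eleventh unit, and it is from families of such derivatives that the equilibrium behaviour of the closed system was supposed to be deduced.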

These models later became the basis for microeconomics, which seeks to describe interactions between parts in exact quantifiable measures - such as marginal cost, marginal revenue, marginal utility, and growth of total revenue as indexed against individual units of output. In analogy with classical mechanics, the quantities are viewed as initial conditions that can serve to explain subsequent interactions between parts in the closed system in something like deterministic terms. The combination of classical macro-analysis with micro-analysis resulted in what Thorstein Veblen in 1900 termed neoclassical economics - the model for understanding economic reality that is widely used today.

Beginning in the 1930s, the challenge became to describe the interactions between parts in closed economic systems with more sophisticated mathematical models using devices like linear programming, game theory, and new statistical techniques. In spite of the growing mathematical sophistication, these models are based on the same assumptions from classical physics featured in previous neoclassical economic theory - with one addition. They also appeal to the assumption that systems exist in equilibrium or in perturbations from equilibria, and they seek to describe the state of the closed economic system in these terms.

One could argue that the fact that our economic models are based on assumptions from classical mechanics is not a problem, by appealing to the two-domain distinction between micro-level and macro-level processes discussed earlier. Since classical mechanics serves us well in our dealings with macro-level phenomena, in situations where the speed of light is so large and the quantum of action is so small as to be safely ignored for practical purposes, economic theories based on assumptions from classical mechanics should serve us well in dealing with the macro-level behaviour of economic systems.
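
Stated schematically (an illustration added here), the classical description is a good approximation precisely when
\[
\frac{v}{c} \ll 1 \qquad \text{and} \qquad S \gg \hbar,
\]
where \(v\) is a characteristic speed, \(c\) the speed of light, \(S\) a characteristic action of the system, and \(\hbar\) the reduced quantum of action; the argument above simply assumes that economic systems always sit safely inside this regime.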

The obvious problem is that nature refuses to operate in accordance with these assumptions. In the biosphere, the interaction between parts is intimately related to the whole, no collection of parts is isolated from the whole, and the ability of the whole to regulate the relative abundance of atmospheric gases suggests that the whole of the biota displays emergent properties that are more than the sum of its parts. What the current ecological crisis reveals is the gap between the abstract virtual world of neoclassical economic theory and the real economy. The real economy comprises all human activities associated with the production, distribution, and exchange of tangible goods and commodities and the consumption and use of natural resources, such as arable land and water. Although expanding economic systems in the real economy are obviously embedded in a web of relationships with the entire biosphere, our measures of healthy economic systems disguise this fact very nicely. Consider, for example, the following description of a healthy economic system written in 1996 by Frederick Hu, head of the competitiveness research team for the World Economic Forum: short of military conquest, economic growth is the only viable means for a country to sustain increases in national living standards . . . An economy is internationally competitive if it performs strongly in three general areas: abundant productive inputs from capital, labour, infrastructure and technology; optimal economic policies such as low taxes, little interference and free trade; and sound market institutions such as the rule of law and protection of property rights.

This prescription for medium-term growth in countries like Russia, Brazil, and China may seem utterly pragmatic and quite sound. But the virtual economy described is a closed and hermetically sealed system in which the invisible hand of economic forces allegedly results in a healthy, growing economy if impediments to its operation are removed or minimized. It is, of course, often true that such prescriptions can have the desired results in terms of increases in living standards, and Russia, Brazil and China are seeking to implement them in various ways.

In the real economy, however, these systems are clearly not closed or hermetically sealed: Russia uses carbon-based fuels in production facilities that produce large amounts of carbon dioxide and other gases that contribute to global warming; Brazil is in the process of destroying a rain forest that is critical to species diversity and the maintenance of a relative abundance of atmospheric gases that regulate the Earth’s temperature; and China is seeking to build a first-world economy based on highly polluting old-world industrial plants that burn soft coal. Not to forget that the economic system the world now seems to regard as the best example of the benefits that can be derived from the workings of the invisible hand, that of the United States, operates in the real economy as one of the primary contributors to the ecological crisis.

In “Consilience,” Edward O. Wilson makes the case that effective and timely solutions to the problems threatening human survival are critically dependent on something like a global revolution in ethical thought and behaviour. But his view of the basis for this revolution is quite different from our own. Wilson claimed that since the foundations for moral reasoning evolved in what he termed ‘gene-culture’ evolution, the rules of ethical behaviour are emergent aspects of our genetic inheritance. Based on the assumption that the behaviour of contemporary hunter-gatherers resembles that of our hunter-gatherer forebears in the Palaeolithic Era, he drew on accounts of Bushman hunter-gatherers living in the central Kalahari in an effort to demonstrate that ethical behaviour is associated with instincts like bonding, cooperation, and altruism.

Wilson argued that these instincts evolved in our hunter-gatherer ancestors through genetic mutation, and that the ethical behaviour associated with these genetically based instincts provided a survival advantage. He then claimed that since these genes were passed on to subsequent generations and eventually became pervasive in the human genome, the ethical dimension of human nature has a genetic foundation. When we fully understand the “innate epigenetic rules of moral reasoning,” wrote Wilson, the rules will probably turn out to be an ensemble of many algorithms whose interlocking activities guide the mind across a landscape of nuanced moods and choices.

Any reasonable attempt to lay a firm foundation beneath the quagmire of human ethics in all of its myriad and often contradictory formulations is admirable, and Wilson’s attempt is more admirable than most. In our view, however, there is little or no prospect that it will prove successful, for a number of reasons. While we will probably discover some linkage between genes and behaviour, the range of human ethical behaviour, and the survival advantages such behaviour may confer, seem far too complex, not to mention inconsistent, to be reduced to any given set of “epigenetic rules of moral reasoning.”

Also, moral codes may derive in part from instincts that confer a survival advantage, but when we examine these codes, it also seems clear that they are primarily cultural products. This explains why ethical systems are constructed in a bewildering variety of ways in different cultural contexts and why they often sanction or legitimate quite different thoughts and behaviours. Let us not forget that rules of ethical behaviour are quite malleable and have been used to legitimate human activities such as slavery, colonial conquest, genocide and terrorism. As Cardinal Newman cryptically put it, “Oh how we hate one another for the love of God.”

According to Wilson, the “human mind evolved to believe in the gods” and people “need a sacred narrative”. Yet the gods in his view are merely human constructs and, therefore, there is no basis for dialogue between the world views of science and religion. “Science, for its part, will test relentlessly every assumption about the human condition and in time uncover the bedrock of moral and religious sentiment.” The result of the competition between the two world views, he believes, will be the secularization of the human epic and of religion itself.

Wilson obviously has a right to his opinions, and many will agree with him for their own good reasons, but what is most interesting about his thoughtful attempt to posit a more universal basis for human ethics is that it is based on classical assumptions about the character of both physical and biological realities. While Wilson does not argue that human behaviour is genetically determined in the strict sense, he does allege that there is a causal linkage between genes and behaviour that largely conditions this behaviour, and he appears to be a firm believer in the classical assumption that reductionism can disclose the lawful essences that govern physical reality, including those associated with the alleged “epigenetic rules of moral reasoning.”

Once again, in Wilson’s view there is apparently nothing that cannot be reduced to scientific understanding or fully disclosed in scientific terms, and his hope for the future of humanity is that the triumph of scientific thought and method will allow us to achieve the Enlightenment ideal of disclosing the lawful regularities that govern or regulate all aspects of human experience. Hence, science will uncover the “bedrock of moral and religious sentiment,” and the entire human epic will be mapped in the secular space of scientific formalism. The intent here is not to denigrate Wilson’s attempt to posit a more universal basis for the human condition, but to demonstrate that any attempt to understand or improve upon human behaviour based on appeals to outmoded classical assumptions is unrealistic. If the human mind did, in fact, evolve in something like deterministic fashion in gene-culture evolution - and if there were, in fact, innate mechanisms in the mind that are both lawful and benevolent - Wilson’s program for uncovering these mechanisms could have merit. But for all the reasons given, classical determinism cannot explain the human condition, and the Darwinian account of evolution must be modified to accommodate the complementary relationship between cultural and biological principles that governs human evolution and human interaction, in the movement toward self-realization and undivided wholeness.

Freud’s use of the word ‘superman’ or ‘overman’ in and of itself might indicate only a superficial familiarity with a popular term associated with Nietzsche. However, as Holmes has pointed out, Freud is discussing the holy, or saintly, and its relation to repression and the giving up of freedom of instinctual expression, central concerns of the third essay of On the Genealogy of Morals, ‘What is the Meaning of Ascetic Ideals?’

Nietzsche writes of the anti-nature of the ascetic ideal, how it relates to a disgust with oneself, its continuing destructive effect upon the health of Europeans, and how it relates to the realm of ‘subterranean revenge’ and ressentiment. In addition, Nietzsche writes of the repression of instincts (though not specifically of impulses toward sexual perversions) and of their being turned inward against the self. He writes of the ‘instinct for freedom forcibly made latent . . . this instinct for freedom pushed back and repressed’. Zarathustra, likewise, speaks of one who must find ‘illusion and caprice even in the most sacred, that freedom from his love may become his prey’. While the formulation as it pertains to sexual perversions and incest certainly does not derive from Nietzsche (although, along different lines, incest was an important factor in Nietzsche’s understanding of Oedipus), the formulation relating to freedom was very possibly influenced by Nietzsche, particularly in light of Freud’s reference to the ‘holy’ as well as to the ‘overman’, as these issues are explored in the Antichrist, which had been published just two years earlier.

Nietzsche had written of sublimation, and in the Genealogy he specifically wrote of the sublimation of sexual drives. Freud’s use of the term here differs somewhat from his later and more Nietzschean usage, such as in Three Essays on the Theory of Sexuality, but as Kaufmann notes, while ‘the word is older than either Freud or Nietzsche . . . it was Nietzsche who first gave it the specific connotation it has today’. Kaufmann regards sublimation as one of the most important concepts in Nietzsche’s entire philosophy.

Of course it is difficult to determine whether Freud had recently been reading Nietzsche or was consciously or unconsciously drawing on information he had come across some years earlier. It is also possible that Freud had, recently or some time earlier, read only a limited portion of the Genealogy or other works. Later in his life Freud claimed he could not read more than a few passages of Nietzsche because he was overwhelmed by the wealth of ideas. This claim might be supported by the fact that Freud demonstrates only a limited understanding of certain of Nietzsche’s concepts. For example, his reference to the ‘overman’ shows a lack of understanding that the overman involves self-overcoming and sublimation, not simply the free gratification of primitive instincts. Later in life, Freud demonstrates a similar misunderstanding in his equation of the overman with the tyrannical father of the primal horde. Perhaps Freud confused the overman with the ‘master’ whose morality is contrasted with ‘slave’ morality in the Genealogy and Beyond Good and Evil. The conquering master more freely gratifies instinct and affirms himself, his world and his values as good. The conquered slave, unable to express himself freely, creates a negating, resentful, vengeful morality glorifying his own crippled, alienated condition, and he creates a division not between good (noble) and bad (contemptible), but between good (undangerous) and evil (wicked, powerful, dangerous).

Much of what Rycroft writes is similar to, implicit in, or at least compatible with what we have seen of Nietzsche’s thought and the other material that has been placed on the table for consideration. Rycroft specifically states that he takes up ‘a position much nearer Groddeck’s [on the nature of the “it” or id] than Freud’s’. He does not mention that Freud was aware of Groddeck’s concept of the “it” and understood the term to be derived from Nietzsche. For Nietzsche, moreover, there is nothing beyond ‘the process itself’; it is only as a consequence of grammatical habit that we assume the activity, ‘thinking’, requires an agent.

The self, as manifested in the construction of dreams, may be an aspect of our psychic life that knows things our waking “I” or ego may not know and may not wish to know, and a relationship may be developed between these aspects of our psychic lives in which the latter opens itself creatively to the communications of the former. Zarathustra states: ‘Behind your thoughts and feelings, my brother, there stands a mighty ruler, an unknown sage - whose name is self. In your body he dwells, he is your body’. Nonetheless, Nietzsche’s self cannot be understood as a replacement for an all-knowing God to whom the “I” or ego appeals for its wisdom, commandments, guidance and the like. To open oneself to another aspect of oneself that is wiser (an unknown sage), in the sense that new information can be derived from it, does not necessarily entail that this ‘wiser’ component of one’s psychic life has God-like knowledge and commandments which, if one’s “I” deciphers and opens itself to them correctly, will set one on the straight path. It is true, though, that when Nietzsche writes of the self as ‘a mighty ruler, an unknown sage’, he does open himself to such an interpretation, and even to the possibility that this ‘ruler’ is unreachable and unapproachable for the “I.” Nietzsche/Zarathustra, however, in ‘On the Despisers of the Body’, makes it clear that there are aspects of our psychic selves that interpret the body and mediate its directives, ideally in ways that do not deny the body but aid the body in doing ‘what it would do above all else, to create beyond itself’.

Also, the idea of a fully formed, even if unconscious, ‘mighty ruler’ and ‘unknown sage’ as a true self beneath a merely apparent surface is at odds with Nietzsche’s idea that there is no one true, stable, enduring self in itself to be found once the veil of appearance is removed. And even early in his career Nietzsche wrote sarcastically of ‘that cleverly discovered well of inspiration, the unconscious’. There is, though, a tension in Nietzsche between the notion of bodily-based drives pressing for discharge (which can, among other things, be sublimated) and a more organized bodily-based self which may be ‘an unknown sage’, and in relation to which the “I” may open itself to potential communications. There is no such conception of the self in Freud, for whom the dream is not produced with the intention of being understood.

Nietzsche explored the ideas of psychic energy and of drives pressing for discharge. His discussion of sublimation typically implies an understanding of drives in just such a sense, as does his idea that dreams provide for the discharge of drives. Nonetheless, he did not relegate all that is derived from instinct and the body to this realm. While for Nietzsche there is no stable, enduring true self awaiting discovery and liberation, the body and the self (in the broadest sense of the term, including what is unconscious and may be at work in dreams as Rycroft describes it) may offer up potential communication and direction to the “I” or ego. However, at times Nietzsche describes the “I” or ego as having very little, if any, idea as to how it is being directed by the “it.”

Nietzsche, like Freud, describes two types of mental processes: one which ‘binds’ [man’s] life to reason and its concepts, in order that he not be swept away by the current and lose himself; the other, pertaining to the worlds of myth, art and the dream, ‘constantly showing the desire to shape the existing world of the wide-awake person to be variegatedly irregular and disinterestedly incoherent, exciting and eternally new, as is the world of dreams’. Art may function as a ‘middle sphere’ and ‘middle faculty’ (a transitional sphere and faculty) between a more primitive ‘metaphor-world’ of impressions and the forms of uniform abstract concepts.

Again, Nietzsche, like Freud, attempts to account for the function of consciousness in light of a new understanding of unconscious mental functioning. Nietzsche distinguishes himself from ‘older philosophers’ who do not appreciate the significance of unconscious mental functioning, while Freud distinguishes the unconscious of the philosophers from the unconscious of psychoanalysis. What is missing is the acknowledgement of Nietzsche as a philosopher and psychologist whose ideas on unconscious mental functioning have very strong affinities with psychoanalysis, as Freud himself will mention on a number of other occasions. Neither here, nor in the letters to Fliess in which he mentions Lipps, nor in his later paper in which Lipps (the ‘German philosopher’) is acknowledged again, is Nietzsche mentioned when it comes to acknowledging, in a specific and detailed manner, an important forerunner of psychoanalysis. Although Freud will state on a number of occasions that Nietzsche’s insights are close to psychoanalysis, very rarely will he give any details regarding the similarities. He mentions a friend calling his attention to the notion of the criminal from a sense of guilt, a patient calling his attention to the pride-memory aphorism, Nietzsche’s remark about dreams and the psyche of primitive man, and so on, but there is never any detailed statement of just what Nietzsche anticipated of psychoanalysis. This is so even after Freud had been taking Nietzsche with him on vacation.

Equally important, the classical assumption that the only privileged or valid knowledge is scientific is one of the primary sources of the stark division between the two cultures of humanists-social scientists and scientists-engineers. In this view, Wilson is quite correct in assuming that a timely end to the two-culture war and a renewed dialogue between members of these cultures is now critically important to human survival. It is also clear, however, that dreams of reason based on the classical paradigm will only serve to perpetuate the two-culture war. Since these dreams are remnants of an old scientific world view that no longer applies, in theory or in fact, to the actual character of physical reality, they will in all probability serve to frustrate the solution of real-world problems.

There is, however, a new basis for dialogue between the two cultures, although it is quite different from that described by Wilson. Since classical epistemology has been displaced, or is in the process of being displaced, by the new epistemology of science, the truths of science can no longer be viewed as transcendent and absolute in the classical sense. The universe more closely resembles a giant organism than a giant machine, and it displays emergent properties that serve to perpetuate the existence of the whole, in both physics and biology, that cannot be explained in terms of unrestricted determinism, simple causality, first causes, linear movements and initial conditions. Perhaps the first and most important precondition for renewed dialogue between the two cultures is the awareness, as Einstein put it, that a human being is a ‘part of the whole’. It is this awareness that allows us to free ourselves of the ‘optical delusion’ of our present conception of self as something ‘limited in space and time’ and to widen ‘our circle of compassion to embrace all living creatures and the whole of nature in its beauty’. One cannot, of course, merely reason oneself into an acceptance of this view; it also requires the capacity for what Einstein termed ‘cosmic religious feeling’. Perhaps it is the experience of this sense of unity with the whole, rather than any argument, that makes an essential difference to how we understand our own existence within the universe.

Those who have this capacity will hopefully be able to communicate their enhanced scientific understanding of the relation between the part, which is our self, and the whole, which is the universe, in ordinary language with enormous emotional appeal. The task that lies before the poets of this new reality has been nicely described by Jonas Salk: man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect ‘reality’. By using the processes of Nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing reality as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth as comprehensible guides to living. In this way, man’s imagination and intellect can play their vital roles in his survival and further evolution.

It is time, we suggest, for the religious imagination and the religious experience to engage the complementary truths of science in filling that silence with meaning. This does not mean, least of all, that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require any ontology, and is in no way diminished by the lack of any ontology. One is free to recognize a basis for a dialogue between science and religion for the same reason that one is free to deny that this basis exists - there is nothing in our current scientific world view that can prove the existence of God or Being and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being.

The present time is clearly a time of a major paradigm shift, but consider the last great paradigm shift, the one that resulted in the Newtonian framework. That previous shift was profoundly problematic for the human spirit: it led to the conviction that we are strangers, freaks of nature, conscious beings in a universe that is almost entirely unconscious, and that, since the universe is strictly deterministic, even the free will we feel in regard to the movements of our bodies is an illusion. Yet it was probably necessary for the Western mind to go through the acceptance of such a paradigm.

In the final analysis there will be philosophers unprepared to accept that the principle that, if a given cognitive capacity is psychologically real, then there must be an explanation of how it is possible for an individual in the course of human development to acquire that capacity - or anything like that principle - can have a role to play in philosophical accounts of concepts and conceptual abilities. The most obvious basis for such a view would be a Fregean distrust of ‘psychology’ that leads to a rigid division of labour between philosophy and psychology. The operative thought is that the task of a philosophical theory of concepts is to explain what a given concept is or what a given conceptual ability consists in, and this, it is frequently maintained, is something that can be done in complete independence of explaining how such a concept or ability might be acquired. The underlying distinction is one between philosophical questions centring on concept possession and psychological questions centring on concept acquisition. However strictly one adheres to this distinction, though, it provides no support for rejecting the principle that a psychologically real cognitive capacity must be one whose acquisition can be explained. The neo-Fregean distinction is directed against the view that facts about how concepts are acquired have a role to play in explaining and individuating concepts. But this view does not have to be disputed by a supporter of that principle; all the supporter is committed to is the claim that no satisfactory account of what a concept is should make it impossible to explain how that concept can be acquired. This principle has nothing to say about the further question of whether the psychological explanation has a role to play in a constitutive explanation of the concept, and hence it is not in conflict with the neo-Fregean distinction.

A full account of the structure of consciousness will need to attend to those higher, conceptual forms of consciousness to which little attention has so far been given, and to how they might emerge. One natural starting point is the thought that an explanation of everything distinctive about consciousness will emerge out of an account of what it is for a subject to be capable of thinking about himself. But this is not, by itself, a proper understanding of the complex phenomenon of consciousness, and there are no facts about linguistic mastery that will by themselves determine or explain what might be termed the cognitive dynamics of individual thinkers. The way forward for a theory of consciousness, it seems, is first to chart the characteristic features individuating the various distinct conceptual forms of consciousness in a way that provides a taxonomy, and then to show how each of these forms emerges from more determinate levels of content. What is hoped is now clear is that these higher forms of consciousness emerge from a rich foundation of non-conceptual representations, and that they hold the key not just to an account of how mastery of concepts is achieved, but to a proper understanding of self-consciousness and of consciousness as a whole.

Until very recently it might have been said that most approaches to the philosophy of science were ‘cognitive’ in orientation. This includes ‘logical positivism’, and indeed nearly all of those who wrote about the nature of science would have agreed that science ought to be ‘value-free’. This had been a particular emphasis of the first positivists, as it would be of their twentieth-century successors: science deals with ‘facts’, and facts and values are irreducibly distinct. Facts are objective; they are what we seek in our knowledge of the world. Values are subjective: they bear the mark of human interest; they are the radically individual products of feeling and desire. Value cannot, therefore, be inferred from fact, and fact ought not to be influenced by value. There were philosophers, notably some in the Kantian tradition, who viewed the relation between fact and value differently, but the dominant view reflected the legacy of three centuries of largely empiricist reflection on the ‘new’ sciences ushered in by Galileo Galilei (1564-1642), the Italian scientist whose distinctive contributions belong to the history of physics and astronomy rather than to natural philosophy.

The philosophical importance of Galileo’s science rests largely upon the following closely related achievements: (1) his stunningly successful arguments against Aristotelean science; (2) his proofs that mathematics is applicable to the real world; (3) his conceptually powerful use of experiments, both actual and imagined; (4) his treatment of causality, replacing appeal to hypothesized natural ends with a quest for efficient causes; and (5) his unwavering confidence in the new style of theorizing that would become known as ‘mechanical explanation’.

A century later, the maxim that scientific knowledge is ‘value-laden’ seems almost as entrenched as its opposite was earlier. It is supposed that the wall between fact and value has been breached, and philosophers of science seem quite at home with the thought that science and value may be closely intertwined after all. What has happened to cause such an apparently radical change? What are its implications for the objectivity of science, the prized characteristic that, from Plato’s time onwards, has been assumed to set off real knowledge (epistēmē) from mere opinion (doxa)? To answer these questions adequately, one would first have to know something of the reasons behind the decline of logical positivism, as well as of the diversity of the philosophies of science that have succeeded it.

More generally, the interdisciplinary field of cognitive science is burgeoning on several fronts. Contemporary philosophical reflection about the mind - which has been quite intensive - has been influenced by this empirical inquiry, to the extent that the boundary lines between them are blurred in places.

Nonetheless, the philosophy of mind at its core remains a branch of metaphysics, traditionally conceived. Philosophers continue to debate foundational issues in terms not radically different from those in vogue in previous eras. Many issues in the metaphysics of science hinge on the notion of ‘causation’. This notion is as important in science as it is in everyday thinking, and much scientific theorizing is concerned specifically to identify the ‘causes’ of various phenomena. However, there is little philosophical agreement on what it is to say that one event is the cause of another.

Modern discussion of causation starts with the Scottish philosopher, historian, and essayist David Hume (1711-76), who argued that causation is simply a matter of ‘constant conjunction’. Hume denies that we have innate ideas, that the causal relation is observably anything other than constant conjunction, that there are observable necessary connections anywhere, that there is either an empirical or a demonstrative proof for the assumptions that the future will resemble the past and that every event has a cause, that the dispute between advocates of free will and determinism is irresolvable, that extreme scepticism is coherent, and that we can find the experiential source of our ideas of self, substance or God.

According to Hume (1978), one event causes another if and only if events of the type to which the first event belongs regularly occur in conjunction with events of the type to which the second event belongs. This formulation, however, leaves several questions open. First, there is the problem of distinguishing genuine ‘causal laws’ from ‘accidental regularities’. Not all regularities are sufficiently law-like to underpin causal relationships. Being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that the screws are made of copper because they are in my desk. Secondly, the idea of constant conjunction does not give a ‘direction’ to causation. Causes need to be distinguished from effects. Yet knowing that A-type events are constantly conjoined with B-type events does not tell us which of ‘A’ and ‘B’ is the cause and which the effect, since constant conjunction is itself a symmetric relation. Thirdly, there is a problem about ‘probabilistic causation’. When we say that causes and effects are constantly conjoined, do we mean that the effects are always found with the causes, or is it enough that the causes make the effects probable?
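
These alternatives can be stated schematically (an illustration added here, not Hume’s own notation):
\[
\text{strict regularity:}\quad \text{every } C\text{-type event is followed by an } E\text{-type event};
\]
\[
\text{probabilistic reading:}\quad P(E \mid C) \;>\; P(E \mid \neg C),
\]
so that on the second reading smoking can count as a cause of cancer even though many smokers never develop the disease.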

Many philosophers of science during the past century have preferred to talk about ‘explanation’ rather than causation. According to the covering-law model of explanation, something is explained if it can be deduced from premises that include one or more laws. As applied to the explanation of particular events, this implies that a particular event can be explained if it is linked by a law to some other particular event. However, while they are often treated as separate theories, the covering-law account of explanation is at bottom little more than a variant of Hume’s constant conjunction account of causation. This affinity shows up in the fact that the covering-law account faces essentially the same difficulties as Hume’s: (1) in appealing to deduction from ‘laws’, it needs to explain the difference between genuine laws and accidentally true regularities; (2) it allows the explanation of causes by effects, as well as of effects by causes - after all, it is as easy to deduce the height of the flag-pole from the length of its shadow and the laws of optics as the reverse; (3) are the laws invoked in explanation required to be exceptionless and deterministic, or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes cancer more likely in explaining why some particular person develops cancer?
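
The deductive-nomological pattern behind the covering-law model can be displayed schematically (again, an added illustration):
\[
\underbrace{L_1, \ldots, L_n}_{\text{laws}},\;\; \underbrace{C_1, \ldots, C_m}_{\text{particular conditions}} \;\;\vdash\;\; E \quad (\text{the event to be explained}),
\]
and each of the three difficulties above is, in effect, a constraint on which laws and conditions may legitimately appear among the premises.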

Nevertheless, one of the central aims of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explanations of central theoretical notions invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. By introducing teleological considerations, one such account views beliefs as states with biological purposes and analyses their truth conditions as the conditions with which those states are biologically supposed to covary.

Similarly, teleological theories hold that ‘r’ represents ‘x’ if it is r’s function to indicate (i.e., covary with) ‘x’. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical and a-historical theories of functions. Historical theories individuate functional states (and therefore contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was ‘learned’, or the way it evolved. A historical theory might hold that the function of ‘r’ is to indicate ‘x’ only if the capacity to token ‘r’ was developed (selected, learned) because it indicates ‘x’. Thus, a state physically indistinguishable from ‘r’ (physical states being a-historical) but lacking r’s historical origins would not represent ‘x’ according to historical theories.
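
The two readings can be set side by side schematically (an illustrative formulation, not a quotation from any particular theory):
\[
\textit{a-historical:}\quad r \text{ represents } x \;\iff\; \text{the function of } r \text{ is to covary with } x;
\]
\[
\textit{historical:}\quad r \text{ represents } x \;\iff\; \text{tokens of } r \text{ were selected (or learned) because they covaried with } x,
\]
which is why a physical duplicate of ‘r’ with no such history fails to represent ‘x’ on the historical reading.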

The American philosopher of mind Jerry Alan Fodor (1935-) is known for his resolute realism about the nature of mental functioning and for taking the analogy between thought and computation seriously. Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of holists such as the American philosopher Donald Herbert Davidson (1917-2003), or with those of instrumentalists about mental ascription, such as the British philosopher of logic and language Michael Anthony Eardley Dummett (1925-). In recent years Fodor has become a vocal critic of some of the aspirations of cognitive science.

Nonetheless, a teleological suggestion for connecting ‘causation’ and ‘content’ runs as follows. Suppose that there is a causal path from A’s to ‘A’s and a causal path from B’s to ‘A’s, and that our problem is to find some difference between B-caused ‘A’s and A-caused ‘A’s in virtue of which the former but not the latter misrepresent. Perhaps the two paths differ in their counterfactual properties. In particular, even if both A’s and B’s actually cause ‘A’s, perhaps we can assume that only A’s would cause ‘A’s in - as one might say - ‘optimal circumstances’. We could then hold that a symbol expresses its ‘optimal property’, viz., the property that would causally control its tokening in optimal circumstances. Correspondingly, when the tokening of a symbol is causally controlled by properties other than its optimal property, the tokens that result are ipso facto wild.

Suppose that this story about ‘optimal circumstances’ is proposed as part of a naturalized semantics for mental representations. In that case it is, of course, essential that the optimal circumstances for tokening a mental representation be specified in terms that are not themselves semantical or intentional. (It would not do, for example, to identify the optimal circumstances for tokening a symbol as those in which the tokens are true; that would be to assume precisely the semantical notion that the theory is supposed to naturalize.) The suggestion - to put it concisely - is that appeals to ‘optimality’ should be buttressed by appeals to ‘teleology’: optimal circumstances are the ones in which the mechanisms that mediate symbol tokening are functioning ‘as they are supposed to’. In the case of mental representations, these would paradigmatically be circumstances in which the mechanisms of belief fixation are functioning as they are supposed to.

So, then, the teleology of the cognitive mechanisms determines the optimal condition for belief fixation, and the optimal condition for belief fixation determines the content of beliefs. So the story goes.

There is, however, an objection. The teleology story strikes one as plausible in that it explicates one normative notion - truth - in terms of another normative notion - optimality. But this appearance is spurious: there is no guarantee that the kind of optimality that teleology reconstructs is related to the kind of optimality that the explication of ‘truth’ requires. When mechanisms of repression are working ‘optimally’ - when they are working ‘as they are supposed to’ - what they deliver are likely to be falsehoods.

Once again, there is no obvious reason why conditions that are optimal for the tokening of one sort of mental symbol need be optimal for the tokening of other sorts. Perhaps the optimal conditions for fixing beliefs about very large objects are different from those for fixing beliefs about very small ones, which are different again from those for fixing beliefs about sights. This raises the possibility that, in order to say which conditions are optimal for the fixation of a given belief, we must already know what the content of that belief is. Our explication of content would then require a notion of optimality, whose explication in turn requires a notion of content, and the resulting pile would clearly be unstable.

Teleological theories, to repeat, hold that ‘r’ represents ‘x’ if it is r’s function to indicate (i.e., covary with) ‘x’, and they differ depending on the theory of functions they import. On a historical theory of functions, the function of ‘r’ is to indicate ‘x’ only if the capacity to token ‘r’ was developed (selected, learned) because it indicates ‘x’; a state physically indistinguishable from ‘r’ (physical states being a-historical) but lacking r’s historical origins would therefore not represent ‘x’.

Functional role theories, by contrast, hold that r’s representing ‘x’ is grounded in the functional role ‘r’ has in the representing system, i.e., in the relations imposed by specified cognitive processes between ‘r’ and the other representations in the system’s repertoire. Functional role theories take their cue from such common-sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

That being said, nowhere is the new period of collaboration between philosophy and other disciplines more evident than in the new subject of cognitive science. Cognitive science from its very beginning has been ‘interdisciplinary’ in character, and is in effect the joint property of psychology, linguistics, philosophy, computer science and anthropology. There is, therefore, a great variety of research projects within cognitive science, but the central area of cognitive science, its hard core, rests on the assumption that the mind is best viewed as analogous to a digital computer. The basic idea behind cognitive science is that recent developments in computer science and artificial intelligence have enormous importance for our conception of human beings. The basic inspiration for cognitive science went something like this: human beings do information processing; computers are designed precisely to do information processing; therefore, one way to study human cognition - perhaps the best way to study it - is to study it as a matter of computational information processing. Some cognitive scientists think that the computer is just a metaphor for the human mind; others think that the mind is literally a computer program. Still, it is fair to say that without the computational model there would not have been a cognitive science as we now understand it.

An Essay Concerning Human Understanding is the first modern systematic presentation of an empiricist epistemology, and as such it had important implications for the natural sciences and for philosophy of science generally. Like his predecessor Descartes, the English philosopher John Locke (1632-1704) began his account of knowledge from the conscious mind aware of ideas. Unlike Descartes, however, he was concerned not to build a system based on certainty, but to identify the mind’s scope and limits. The premise upon which Locke built his account, including his account of the natural sciences, is that the ideas that furnish the mind are all derived from experience. He thus totally rejected any kind of innate knowledge. In this he consciously opposed Descartes, who had argued that it is possible to come to knowledge of fundamental truths about the natural world through reason alone. Descartes (1596-1650) had argued that we can come to know the essential nature of both ‘minds’ and ‘matter’ by pure reason. Locke accepted Descartes’s criterion of clear and distinct ideas as the basis for knowledge, but denied any source for them other than experience. Ideas that come through the five senses (ideas of sensation) and ideas engendered by inner experience (ideas of reflection) are, for Locke, the building blocks from which all our knowledge and understanding are constructed.

Locke combined his commitment to ‘the new way of ideas’ with an espousal of the ‘corpuscular philosophy’ of the Irish scientist Robert Boyle (1627-92). This, in essence, was an acceptance of a revised, more sophisticated account of matter and its properties advocated by the ancient atomists and recently supported by Galileo (1564-1642) and Pierre Gassendi (1592-1655). Boyle argued from theory and experiment that there were powerful reasons to justify some kind of corpuscular account of matter and its properties. He called the latter qualities, which he distinguished as primary and secondary. The distinction between primary and secondary qualities may be reached by two different routes: either from the nature or essence of matter or from the nature and essence of experience, though in practice these have a tendency to run together. The former considerations make the distinction seem like an a priori, or necessary, truth about the nature of matter, while the latter make it an empirical hypothesis. Locke, too, accepted this account, arguing that the ideas we have of the primary qualities of bodies resemble those qualities as they are in the object, whereas the ideas of the secondary qualities, such as colour, taste, and smell, do not resemble their causes in the object.

There is no strong connection between acceptance of the primary-secondary quality distinction and Locke’s empiricism; Descartes had also argued strongly for the distinction, which had won wide acceptance among natural philosophers, and Locke embraced it within his more comprehensive empirical philosophy. However, Locke’s empiricism did have major implications for the natural sciences, as he well realized. His account begins with an analysis of experience. All ideas, he argues, are either simple or complex. Simple ideas are those like the red of a particular rose or the roundness of a snowball. Complex ideas, such as our ideas of the rose or the snowball, are combinations of simple ideas. We may create new complex ideas in our imagination - a parallelogram, for example. Nevertheless, simple ideas can never be created by us: we just have them or not, and characteristically they are caused, for example, by the impact on our senses of rays of light or vibrations of sound in the air coming from a particular physical object. Since we cannot create simple ideas, and they are determined by our experience, our knowledge is in a very strict and uncompromising way limited. Besides, our experiences are always of the particular, never of the general. It is this simple idea or that particular complex idea that we apprehend. We never in that sense apprehend a universal truth about the natural world, but only particular instances. It follows from this that all claims to generality about that world - for example, all claims to identify what was then beginning to be called the laws of nature - must to that extent go beyond our experience and thus be less than certain.

The Scottish philosopher, historian, and essayist David Hume (1711-76) gives his famous discussion of causation in both his major philosophical works, the ‘Treatise’ (1739) and the ‘Enquiry’ (1777). The idea of causality - of that which is responsible for an effect, and which figures in our statements of the laws of nature - involves, Hume contends, three ideas:

1. That there should be a regular concomitance between events of the type of the cause and those of the type of the effect.

2. That the cause event should be contiguous with the effect event.

3. That the cause event should necessitate the effect event.

Conditions (1) and (2) occasion no difficulty for Hume, since he believes that there are patterns of sensory impressions unproblematically related to the ideas of regular concomitance and of contiguity. The third requirement, however, is deeply problematic, in that the idea of necessity that figures in it seems to have no sensory impression correlated with it. However carefully and attentively we scrutinize a causal process, we do not seem to observe anything that might be the observed correlate of the idea of necessity. We do not observe any kind of activity, power, or necessitation. All we ever observe is one event following another event which is logically independent of it. Nor is the connection logically necessary, since, as Hume observes, one can jointly assert the existence of the cause and the denial of the existence of the effect, as specified in the causal statement or the law of nature, without contradiction. What, then, are we to make of the seemingly central notion of necessity that is deeply embedded in the very idea of causation, or lawfulness? To this query Hume gives an ingenious and telling answer. There is an impression corresponding to the idea of causal necessity, but it is a psychological phenomenon: our expectation that an event similar to those we have already observed to be correlated with the cause-type of event will occur in this instance too. Where does that impression come from? It is created, as a kind of mental habit, by the repeated experience of regular concomitance between events of the type of the cause and events of the type of the effect. The impression thus corresponds, in the end, to the idea of regular concomitance - and the law of nature then asserts nothing but the existence of that regular concomitance.

At this point in our narrative the question arises whether this factor of life in nature, thus interpreted, corresponds to anything that we observe in nature. All philosophy is an endeavour to obtain a self-consistent understanding of things observed. Its development is thus guided in two ways: one is the demand for coherent self-consistency, the other the elucidation of things observed. With what direct observations, then, are we to conduct such comparisons? Should we turn to science? No. There is no way in which the scientific endeavour can detect the aliveness of things: its methodology rules out the possibility of such a finding. On this point the English mathematician and philosopher Alfred North Whitehead (1861-1947) comments that science can find no individual enjoyment in nature, just as it can find no creativity in nature; it finds mere rules of succession. These negations are true of natural science; they are inherent in its methodology. The reason for this blindness of physical science lies in the fact that such science deals with only half the evidence provided by human experience. It divides the seamless coat - or, to change the metaphor into a happier form, it examines the coat, which is superficial, and neglects the body, which is fundamental.

Whitehead claims that the methodology of science makes it blind to a fundamental aspect of reality, namely the primacy of experience: it neglects half the evidence. Working within Descartes’s dualistic frame of reference, with matter and mind as separate and incommensurate, science limits itself to the study of objectivised phenomena, neglecting the subject and the mental events that make up his or her experience.

Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect ‘blindness’, but there is no need to rely on suspicion: this blindness is evident. Scientific discoveries, impressive as they are, are fundamentally superficial. Science can express regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton’s law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity - gravity. According to this law, the gravitational attraction between two objects decreases in proportion to the square of the distance between them. Why is that so? Newton could not provide an answer. A simpler case still: why does the continuum of space have three dimensions? Why is time one-dimensional? Whitehead notes, ‘None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure that within the scale of observation do in fact prevail’.
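The regularity at issue can be written out explicitly. The notation below is a standard modern statement of Newton’s law, supplied here only for illustration; it is not drawn from Whitehead’s text or from Newton’s own formulation.

\[ F = G\,\frac{m_1 m_2}{r^2} \]

Here \(F\) is the attractive force between two bodies of masses \(m_1\) and \(m_2\), \(r\) is the distance between them, and \(G\) is the gravitational constant. The law states how the force varies with distance; it is silent on why the exponent is 2 rather than any other number, which is precisely the kind of unexplained regularity Whitehead has in mind.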

This analysis reveals that the capacity of science to fathom the depths of reality is limited. For example, if reality is in fact made up of discrete units, and these units have the fundamental character of being ‘the pulsing throbs of experience’, then science may be unable to discover that discreteness, for it has no access to the subjective side of nature since, as the Austrian physicist Erwin Schrödinger (1887-1961) points out, we ‘exclude the subject of cognizance from the domain of nature that we endeavour to understand’. It follows that to find ‘the elucidation of things observed’ in relation to the experiential or aliveness aspect of nature, we cannot rely on science; we need to look elsewhere.

If, instead of relying on science, we rely on our immediate observation of nature and of ourselves, we find, first, that this [i.e., Descartes’s] stark division between mentality and nature has no ground in our fundamental observation: we find ourselves living within nature. Secondly, we should conceive mental operations as among the factors which make up the constitution of nature. Thirdly, we should reject the notion of a hidden nature lying behind the processes we observe: every factor in nature makes a difference, and that difference can only be expressed in terms of the individual character of that factor.

Whitehead goes on to analyse our experience in general, and our observations of nature in particular, and arrives at ‘mutual immanence’ as a central theme. This mutual immanence is, for all its obviousness, striking: I am a part of the universe, and, since I experience the universe, the experienced universe is part of me. Whitehead gives an example: ‘I am in the room, and the room is an item in my present experience. Nevertheless, my present experience is what I am now’. A generalization of this relationship to the case of any actual occasion yields the conclusion that ‘the world is included within the occasion in one sense, and the occasion is included in the world in another sense’. The idea that each actual occasion appropriates its universe follows naturally from such considerations.

The description of an actual entity as a distinct unit is, therefore, only one part of the story. The other, complementary part is this: the very nature of each actual entity is one of interdependence with all the other actual entities in the universe. Every actual entity is a process of prehending, or appropriating, all other actual entities and of creating a new entity out of them all, namely itself.

There are two general strategies for distinguishing laws from accidentally true generalizations. The first stands by Hume’s idea that causal connections are mere constant conjunctions, and then seeks to explain why some constant conjunctions are better than others. That is, this first strategy accepts the principle that causation involves nothing more than certain events always happening along with certain others, and then seeks to explain why some such patterns - the ‘laws’ - matter more than others - the ‘accidents’. The second strategy, by contrast, rejects the Humean presupposition that causation involves nothing more than constant co-occurrence, and instead postulates a relationship of ‘necessitation’, a kind of binding link, which holds between events connected by law, but not between events (like having a screw in my desk and being made of copper) that are only accidentally conjoined.

There are several versions of the first, Humean strategy. The most successful, originally proposed by the Cambridge mathematician and philosopher F.P. Ramsey (1903-30) and revived separately by the American philosopher David Lewis (1941-2002), holds that laws are those true generalizations that can be fitted into an ideal system of knowledge. The thought is that the laws are those patterns that are explicated by basic science, either as fundamental principles themselves or as consequences of those principles, while accidents, although true, have no such explanation. Thus, ‘All water at standard pressure boils at 100°C’ is a consequence of the laws governing molecular bonding, but the fact that all the screws in my desk are copper is not part of the deductive structure of any satisfactory science. Ramsey neatly encapsulated this idea by saying that laws are ‘consequences of those propositions that we should take as axioms if we knew everything and organized it as simply as possible in a deductive system’.

Advocates of the alternative non-Humean strategy object that the difference between laws and accidents is not a ‘linguistic’ matter of deductive systematization, but a ‘metaphysical’ contrast between the kinds of links they report. They argue that there is a link in nature between being at 100°C and boiling, but not between being in my desk and being made of copper, and that this has nothing to do with how descriptions of these links may fit into theories. According to the Australian philosopher D.M. Armstrong (1983), the most prominent defender of this view, the real difference between laws and accidents is simply that laws report relationships of natural ‘necessitation’, while accidents only report that two types of events happen to occur together.

Armstrong’s view may seem intuitively plausible, but it is arguable that the notion of necessitation simply restates the problem rather than solving it. Armstrong says that necessitation involves something more than constant conjunction: if two events are related by necessitation, then it follows that they are constantly conjoined, but two events can be constantly conjoined without being related by necessitation, as when the constant conjunction is just a matter of accident. So necessitation is a stronger relationship than constant conjunction. However, Armstrong and other defenders of this view say very little about what this extra strength amounts to, except that it distinguishes laws from accidents. Armstrong’s critics argue that a satisfactory account ought to cast more light than this on the nature of laws.

Hume said that the earlier of two causally related events is always the cause, and the later the effect. However, there are several objections to using the earlier-later ‘arrow of time’ to analyse the directional ‘arrow of causation’. For a start, it seems possible in principle that some causes and effects could be simultaneous. What is more, the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophical explanation. One of the most popular explanations takes the ‘movement’ from earlier to later to depend on the direction of causation itself, explaining ‘earlier’ as the direction in which causes lie and ‘later’ as the direction of effects. If we take this route, we will clearly need to find some account of the direction of causation that does not itself assume the direction of time.

Several accounts have been proposed. David Lewis (1979) has argued that the asymmetry of causation derives from an ‘asymmetry of over-determination’. The over-determination of present events by past events - consider a person who dies after simultaneously being shot and struck by lightning - is a very rare occurrence; by contrast, the multiple ‘over-determination’ of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis’s example, when the president presses the red button in the White House, the future effects include not only the dispatch of nuclear missiles, but also the fingerprint on the button, his trembling, the further depletion of his gin bottle, the recording of the button’s click on tape, the emission of light waves bearing the image of his action through the window, the passage of the signal current down the wires, and so on, and so on.

Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then this implies that the effect would have been absent too, since (apart from freak cases like the lightning-shooting example) there will not be any other causes left to ‘fix’ the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply that the cause would have been absent, for there are still all the other traces left to ‘fix’ the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.

Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Following the philosopher of science and probability theorist Hans Reichenbach (1891-1953), they note that the different causes of any given type of effect are normally probabilistically independent of each other, while, by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, both obesity and high excitement can cause heart attacks, but this does not imply that fat people are more likely to get excited than thin ones. By contrast, the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically correlated with each other.
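The asymmetry can be put schematically; the following is a standard textbook rendering of Reichenbach’s point, supplied here for illustration rather than drawn from his text. If \(C\) is a common cause of two effects \(E_1\) and \(E_2\), the effects are typically correlated even though the cause screens them off from one another:

\[ P(E_1 \wedge E_2 \mid C) = P(E_1 \mid C)\,P(E_2 \mid C), \qquad P(E_1 \wedge E_2) > P(E_1)\,P(E_2). \]

By contrast, if \(C_1\) and \(C_2\) are independent causes of a common effect \(E\) (obesity and excitement for heart attacks, say), then typically

\[ P(C_1 \wedge C_2) = P(C_1)\,P(C_2), \]

so knowing that one cause is present tells us nothing about whether the other is.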

However, there is another course of thought in the philosophy of science, the tradition of ‘negative’ or ‘eliminative’ induction. From the English statesman and philosopher Francis Bacon (1561-1626) and, in modern times, the philosopher of science Karl Raimund Popper (1902-1994), we have the idea of using logic to bring falsifying evidence to bear on hypotheses about what must universally be the case. Many thinkers accept in essence Popper’s solution to the problem of demarcating proper science from its imitators, namely that the former results in genuinely falsifiable theories whereas the latter do not. This, rightly or wrongly, has underwritten many people’s objections to such ideologies as psychoanalysis and Marxism.

Hume was interested in the processes by which we acquire knowledge: the processes of perceiving and thinking, of feeling and reasoning. He recognized that much of what we claim to know derives from other people second-hand, third-hand or worse; moreover, our perceptions and judgements can be distorted by many factors - by what we are studying, and by the very act of study itself. The main reason, however, behind his emphasis on ‘probabilities and those other measures of evidence on which life and action entirely depend’ is this:

Evidently, all reasoning concerning ‘matter of fact’ is founded on the relation of cause and effect, and . . . we can never infer the existence of one object from another unless they are connected, either mediately or immediately.

When we apparently observe a whole sequence, say of one ball hitting another, what do we observe? And in the much commoner case, when we wonder about the unobserved causes or effects of the events we observe, what precisely are we doing?

Hume recognized that a notion of ‘must’ or necessity is a peculiar feature of causal relations, inferences and principles, and he challenges us to explain and justify the notion. He argued that there is no observable feature of events, nothing like a physical bond, which can properly be labelled the ‘necessary connection’ between a given cause and its effect: events simply occur, and there is no ‘must’ or ‘ought’ about them. However, repeated experience of pairs of events sets up a habit of expectation in us, such that when one of the pair occurs we inescapably expect the other. This expectation makes us infer the unobserved cause or unobserved effect of the observed event, and we mistakenly project this mental inference onto the events themselves. There is no necessity observable in causal relations; all that can be observed is regular sequence. There is necessity in causal inferences, but only in the mind. Once we realize that causation is a relation between pairs of events, we also realize that often we are not present for the whole sequence that we want to divide into ‘cause’ and ‘effect’. Our understanding of the causal relation is thus intimately linked with the role of causal inference, since only causal inferences entitle us to ‘go beyond what is immediately present to the senses’. Two very important assumptions, however, lie behind the causal inference: the assumption that like causes, ‘in like circumstances, will always produce like effects’, and the assumption that ‘the course of nature will continue uniformly the same’ - or, briefly, that the future will resemble the past. Unfortunately, this last assumption has neither empirical nor a priori proof; that is, it can be conclusively established neither by experience nor by thought alone.

Hume endorsed a standard seventeenth-century view that all our ideas are ultimately traceable, by analysis, to sensory impressions of an internal or external kind. Accordingly, he claimed that all his theses are based on ‘experience’, understood as sensory awareness together with memory, since only experience establishes matters of fact. But is our belief that the future will resemble the past properly construed as a belief concerning only a matter of fact? As the English philosopher Bertrand Russell (1872-1970) remarked in the early twentieth century, the real problem Hume raises is whether future futures will resemble future pasts, in the way that past futures really did resemble past pasts. Hume declares that if ‘the past may be no rule for the future, all experience becomes useless and can give rise to no inference or conclusion’. Yet, he held, the supposition that the future resembles the past cannot stem from innate ideas, since there are no innate ideas on his view; nor can it stem from any abstract formal reasoning. For one thing, the future can surprise us, and no formal reasoning seems able to embrace such contingencies; for another, even animals and unthinking people conduct their lives as if they assume the future resembles the past: dogs return for buried bones, children avoid a painful fire, and so forth. Hume is not deploring the fact that we have to conduct our lives on the basis of probabilities, and he is not saying that inductive reasoning could or should be avoided or rejected. Rather, he accepted inductive reasoning but tried to show that, whereas formal reasoning of the kind associated with mathematics cannot establish or prove matters of fact, factual or inductive reasoning lacks the ‘necessity’ and ‘certainty’ associated with mathematics. His position is therefore clear: because ‘every effect is a distinct event from its cause’, only investigation can settle whether any two particular events are causally related. Causal inferences cannot be drawn with the force of logical necessity familiar to us from deduction; but, although they lack such force, they should not be discarded. Causal and inductive inferences are inescapable and invaluable. What, then, makes ‘experience’ the standard of our future judgement? The answer is ‘custom’: it is a brute psychological fact, without which even animal life of a simple kind would be mostly impossible. ‘We are determined by custom to suppose the future conformable to the past’ (Hume, 1978); nevertheless, whenever we need to calculate likely events we must supplement and correct such custom by self-conscious reasoning.

Nonetheless, the causal theory of reference will fail once it is recognized that all representation must occur under some aspect, and that the extensionality of causal relations is inadequate to capture the aspectual character of reference. The only kind of causation that could be adequate to the task of reference is intentional or mental causation, but the causal theory of reference cannot concede that reference is ultimately achieved by some mental device, since the whole point of the approach was to eliminate the traditional mentalism of theories of reference and meaning in favour of objective causal relations in the world. The causal theory, though at present by far the most influential theory of reference, will be a failure for these reasons.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. Still, that is no problem for the identity theory. As J.J.C. Smart (1962), an early advocate of the identity theory, emphasized, the requisite identities do not depend on our concepts of mental states or the meanings of mental terms. For ‘a’ to be identical with ‘b’, ‘a’ and ‘b’ must have the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The principle at work here is the indiscernibility of identicals: if ‘A’ is identical with ‘B’, then every property that ‘A’ has, ‘B’ has, and vice versa. This is sometimes known as Leibniz’s Law.
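Stated formally - in a standard second-order formulation supplied here for clarity rather than taken from the text - Leibniz’s Law reads:

\[ \forall x\,\forall y\,\bigl(x = y \rightarrow \forall F\,(Fx \leftrightarrow Fy)\bigr), \]

that is, if x and y are one and the same thing, then whatever holds of x holds of y, and conversely. Note that this is the indiscernibility of identicals, not the converse (and more controversial) principle of the identity of indiscernibles.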

Nevertheless, a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a dualism at the level of properties, even if we reject dualism of substances and take people simply to be physical organisms. Those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. Dualism disallowed with respect to substances and their states thus simply reappears at the level of the properties of those states.

All in all, the problems faced in providing a reductive naturalistic analysis of internal representation have led many to doubt that this task can be achieved, or even that it is necessary. Although a story can be told about how some words or signs were learned via association or other causal connections with their referents, there is no reason to believe that the ‘stands-for’ relation, or semantic notions in general, can be reduced to or eliminated in favour of non-semantic terms.

Although linguistic and pictorial representations are undoubtedly the most prominent symbolic forms we employ, the range of representational systems humans understand and regularly use is surprisingly large. Sculptures, maps, diagrams, graphs, gestures, musical notation, traffic signs, gauges, scale models, and tailor’s swatches are but a few of the representational systems that play a role in communication, thought and the guidance of behaviour. Indeed, the importance and prevalence of our symbolic activities has been taken as a hallmark of the human.

What is it that distinguishes items that serve as representations from other objects or events? And what distinguishes the various kinds of symbols from each other? As for the first question, there has been general agreement that the basic notion of a representation involves one thing’s ‘standing for’, ‘being about’, ‘referring to’ or ‘denoting’ something else. The major debates have been over the nature of this connection between a representation and that which it represents. As for the second question, perhaps the most famous and extensive attempt to organize and differentiate among alternative forms of representation is found in the works of the American philosopher of science Charles Sanders Peirce (1839-1914), who graduated from Harvard in 1859 and, apart from lecturing at Johns Hopkins University from 1879 to 1884, held almost no teaching post. Peirce’s theory of signs is complex, involving a number of concepts and distinctions that are no longer paid much heed. The aspect of his theory that remains influential and is widely cited is his division of signs into Icons, Indices and Symbols. Icons are signs that are said to be like, or to resemble, the things they represent, e.g., portrait paintings. Indices are signs that are connected to their objects by some causal dependency, e.g., smoke as a sign of fire. Symbols are signs that are related to their objects by virtue of use or association: they are arbitrary labels, e.g., the word ‘table’. This tripartite division among signs, or variants of it, is routinely put forth to explain differences in the way representational systems are thought to establish their links to the world. Further, placing a representation in one of the three divisions has been used to account for the supposed differences between conventional and non-conventional representations, between representations that do and do not require learning to understand, and between representations, like language, that need to be read, and those which do not require interpretation. Some theorists, moreover, have maintained that it is only the use of Symbols that exhibits or indicates the presence of mind and mental states.

Over the years this tripartite division of signs, although often challenged, has retained its influence. More recently, an alternative approach to representational systems (or, as he calls them, ‘symbol systems’) has been put forth by the American philosopher Nelson Goodman (1906-98). Goodman’s treatment of the classical problem of ‘induction’, often phrased in terms of finding some reason to expect that nature is uniform, appears in Fact, Fiction, and Forecast (1954), where he showed that we need in addition some reason for preferring some uniformities to others, for without such a selection the uniformity of nature is vacuous. Goodman (1976) has also proposed a set of syntactic and semantic features for categorizing representational systems. His theory provides for a finer discrimination among types of systems than Peirce’s categories allow. What also emerges clearly is that many rich and useful systems of representation lack a number of features taken to be essential to linguistic or sentential forms of representation, e.g., discrete alphabets and vocabularies, syntax, logical structure, inference rules, compositional semantics and recursive compounding devices.

As a consequence, although these representations can be appraised for accuracy or correctness, it does not seem possible to analyse such evaluative notions along the lines of standard truth theories, geared as they are to structured sentential systems.

If we resist the equation of the justificatory and the explanatory work of reason-giving, we must look for a connection between reasons and action/belief that is present in cases of genuine reason-giving explanation and absent in mere rationalization (a connection present when the agent acts on her better judgement, and absent when she fails to). The classic suggestion in this context is causality: in cases of genuine explanation, the reason-providing intentional states cause the beliefs or actions for which they also provide reasons. This position seems, in addition, to find support from the conditionals and counterfactuals that our reason-giving explanations support, which parallel those supported in other cases of causal explanation. In general terms, where my reason explains my action, the reason was, in those circumstances, necessary for the action, or at least made its occurrence probable. These conditional links can be explained if we accept that the reason-giving link is also a causal one; any alternative account would therefore also need to accommodate them.

The defence of the view that reasons are causes faces a charge of arbitrariness: why should explanation require citing the cause of the cause of a phenomenon, but not the next link in the chain of causes? Perhaps what is not generally true of explanation is true only of mentalistic explanation: only in giving the latter type are we obliged to give the cause of a cause. However, this too seems arbitrary. What is the difference between mentalistic and non-mentalistic explanation that would justify imposing more stringent restrictions on the former? The same argument applies to non-cognitive mental states, such as sensations or emotions. Opponents of behaviourism sometimes reply that mental states can be observed: each of us, through ‘introspection’, can observe at least some mental states, namely our own - at least those of which we are conscious.

The distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Its probable historical ancestor is Aristotle’s similar (but not identical) distinction between final and efficient causes, the latter being that (as a person, fact, or condition) which is responsible for an effect. Recently, the contrast has been drawn primarily in the domain of actions and, secondarily, elsewhere.

Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day, or simply: to get it there in a day. Strictly, the reason is expressed by ‘to get it there in a day’. But this expresses my reason only because I am suitably motivated: I am in a reason state, wanting to get the letter there in a day. It is reason states - especially wants, beliefs and intentions - and not reasons strictly so called that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.

If reason states can motivate, however, why (apart from confusing them with reasons proper) deny that they are causes? For one thing, they are not events, at least in the usual sense entailing change; they are dispositional states (this contrasts them with occurrences, but does not imply that they admit of dispositional analysis). It has also seemed to those who deny that reasons are causes that the former justify as well as explain the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The ‘logical connection argument’ proceeds from this claim to the conclusion that reasons are not causes.

These arguments are inconclusive. First, even if causes are events, sustaining causation may still explain, as where the standing of a broken table is explained by the support of stacked boards replacing its missing legs. Second, the ‘because’ in ‘I sent it by express because I wanted to get it there in a day’ seems to be at least partly causal; construed non-causally it would at best rationalize, rather than explain, the action. And third, if any non-contingent connection can be established between, say, my wanting something and the action it explains, there are close causal analogues, such as the connection between bringing a magnet to iron filings and their gravitating to it: this is, after all, a ‘definitive’ connection, expressing part of what it is to be magnetic, yet the magnet causes the filings to move.

There is, then, a clear distinction between reasons proper and causes, and even between reason states and event causes; but the distinction cannot be used to show that the relation between reasons and the actions they justify is in no way causal. Precisely parallel points hold in the epistemic domain (and, indeed, for anything that similarly admits of justification and explanation by reasons). Suppose my reason for believing that you received my letter today is that I sent it by express yesterday. My reason, strictly speaking, is that I sent it by express yesterday; my reason state is my believing this. Arguably, my reason justifies the further proposition for which it is my reason, and my reason state - my evidential belief - both explains and justifies my belief that you received the letter today. I can say that what justifies that belief is the fact that I sent the letter by express yesterday; but this statement expresses my believing that evidence proposition, for my belief that you received the letter is not justified by the mere truth of the proposition (and can be justified even if that proposition is false).

Similarly, there are, for belief as for action, at least five main kinds of reason: (1) normative reasons, reasons (objective grounds) there are to believe, say, that there is a greenhouse effect; (2) person-relative normative reasons, reasons for, say, me to believe; (3) subjective reasons, reasons I have to believe; (4) explanatory reasons, reasons why I believe; and (5) motivating reasons, reasons for which I believe. Reasons of kinds (1) and (2) are propositions and thus not serious candidates to be causal factors. The states corresponding to (3) may not be causal elements. Reasons why, kind (4), are always (sustaining) explainers, though not necessarily even prima facie justifiers, since a belief can be causally sustained by factors with no evidential value. Motivating reasons are both explanatory and possess whatever minimal prima facie justificatory power (if any) a reason must have to be a basis of belief.

Current discussion of the reasons-causes issue has shifted from the question whether reason states can causally explain to the perhaps deeper questions whether they can justify without so explaining, and what kind of causal relations they bear to the actions and beliefs they do explain. Reliabilists tend to take a belief to be justified by a reason only if it is held at least in part for that reason, in a sense implying, but not entailed by, being causally based on that reason. Internalists often deny this, perhaps because they think we lack internal access to the relevant causal connections. But internalists need internal access only to what justifies - say, the reason state - and not to the (perhaps quite complex) relation it bears to the belief it justifies, in virtue of which it does so. Many questions also remain concerning the very nature of causation, reason-hood, explanation and justification.

Nevertheless, for most causal theorists, the radical separation of the causal and the rationalizing roles of reason-giving explanations is unsatisfactory. For such theorists, where we can legitimately point to an agent’s reasons to explain a certain belief or action, those features of the agent’s intentional states that render the belief or action reasonable must be causally relevant in explaining how the agent came to believe or act in the way they rationalize. One way of putting this requirement is that reason-giving states not only cause but also causally explain their explananda.

The explanans/explanandum terminology has wide currency in philosophical discourse because it allows a succinctness unobtainable in ordinary English. Whether in science, in philosophy or in everyday life, one often offers explanations. The particular statements, laws, theories or facts that are used to explain something are collectively called the ‘explanans’, and the target of the explanation - the thing to be explained - is called the ‘explanandum’. Thus, one might explain why ice forms on the surface of lakes (the explanandum) in terms of the special property of water to expand as it approaches freezing point, together with the fact that materials less dense than liquid water float in it (the explanans). The terms are two different Latin grammatical forms: ‘explanans’ is the present participle of the verb meaning ‘to explain’, and ‘explanandum’ is a noun, derived from the same verb, denoting what is to be explained.

In contrast to what merely happens to us, or to parts of us, actions are what we do. My moving my finger is an action, to be distinguished from the mere motion of that finger. My snoring, likewise, is not something I ‘do’ in the intended sense, though in another, broader sense it is something I often ‘do’ while asleep.

The contrast has both metaphysical and moral import. With respect to my snoring, I am passive and am not morally responsible, unless, for example, I should have taken steps earlier to prevent my snoring. But in cases of genuine action, I am the cause of what happens, and I may properly be held responsible, unless I have an adequate excuse or justification. When I move my finger, I am the cause of the finger’s motion. When I say ‘Good morning’, I am the cause of the sounding utterance. True, the immediate causes are muscle contractions in the one case and lung, lip and tongue motions in the other. But this is compatible with my being the cause - perhaps I cause these immediate causes, or perhaps it is just the case that some events can have both an agent and other events as their cause.

All this is suggestive, but not really adequate. We do not understand the intended force of ‘I am the cause’ any more than we understand the intended force of ‘Snoring is not something I do’. If I trip and fall in your flower garden, ‘I am the cause’ of any resulting damage, but neither the damage nor my fall is my action.

Before considering how we might explain what actions are, as contrasted with ‘mere’ doings, it will be convenient to say something about how they are to be individuated.

If I say ‘Good morning’ to you over the telephone, I have acted. But how many actions have I performed, and how are they related to one another and to associated events? We may describe what is done as follows:

(1) Move my tongue and lips in certain ways, while exhaling.

(2) Say ‘Good morning’.

(3) Cause a certain sequence of modifications in the current flowing in your telephone.

(4) Say ‘Good morning’ to you.

(5) Greet you.

The list - not exhaustive, by any means - is a list of act types. I have performed an action of each type, and an asymmetric ‘by’ relation holds among them: I greet you by saying ‘Good morning’ to you, but not the converse, and similarly for the others on the list. But are these five distinct actions I performed, one of each type, or are the five descriptions all descriptions of a single action, which was of these five (and more) types? Both positions, and a variety of intermediate positions, have been defended.

How many words are there in the sentence ‘The cat is on the mat’? There are, of course, at least two answers to this question, because one can count either word types, of which there are five, or word tokens, of which there are six. Moreover, depending on how one chooses to think of word types, another answer is possible: since the sentence contains a definite article, nouns, a preposition and a verb, there are four grammatically different types of word in the sentence.

The type/token distinction, then, is a distinction between sorts of things and particular instances of them. The identity theory asserts that mental states are physical states, and this raises the question whether the identity in question is an identity of types or of tokens.

During the past two decades or so, the concept of supervenience has seen increasing service in the philosophy of mind. The thesis that the mental is supervenient on the physical - roughly, the claim that the mental character of a thing is wholly determined by its physical nature - has played a key role in the formulation of some influential positions on the mind-body problem. Much of our evidence for mind-body supervenience seems to consist in our knowledge of specific correlations between mental states and physical (in particular, neural) processes in humans and other organisms. Such knowledge, although extensive and in some ways impressive, is still quite rudimentary and far from complete (what do we know, or can we expect to know, about the exact neural substrate for, say, the sudden thought that you are late with your rent payment this month?). It may be that our willingness to accept mind-body supervenience, although based in part on specific psychophysical dependencies, has to be supported by a deeper metaphysical commitment to the primacy of the physical: it may in fact be an expression of such a commitment.

However, there are kinds of mental state that raise special issues for mind-body supervenience. One such kind is ‘wide content’ states, i.e., contentful mental states that seem to be individuated essentially by reference to objects and events outside the subject. Here the notion of a concept, like the related notion of meaning, becomes central. The word ‘concept’ itself is applied to a bewildering assortment of phenomena commonly thought to be constituents of thought. These include internal mental representations, images, words, stereotypes, senses, properties, reasoning and discrimination abilities, and mathematical functions. Given the lack of anything like a settled theory in this area, it would be a mistake to fasten readily on any one of these phenomena as the unproblematic referent of the term. One does better to survey the geography of the area and gain some idea of how these phenomena might fit together, leaving aside for the nonce just which of them deserve to be called ‘concepts’ as ordinarily understood.

There is, however, a specific role that concepts are arguably intended to play that may serve as a point of departure. Suppose one person thinks that capitalists exploit workers, and another that they do not. Call the thing they disagree about a ‘proposition’, e.g., that capitalists exploit workers. It is in some sense shared by them as the object of their disagreement, and it is expressed by the sentence that follows the verb ‘thinks that’; mental verbs that take such sentential complements are verbs of ‘propositional attitude’. Concepts are the constituents of such propositions, just as the words ‘capitalists’, ‘exploit’, and ‘workers’ are constituents of the sentence. These people could have these beliefs only if they had, inter alia, the concepts capitalist, exploit, and worker.

Propositional attitudes, and thus concepts, are constitutive of the familiar form of explanation (so-called ‘intentional explanation’) by which we ordinarily explain the behaviour and states of people, many animals and, perhaps, some machines. The concept of intentionality was originally used by medieval scholastic philosophers. It was reintroduced into European philosophy by the German philosopher and psychologist Franz Clemens Brentano (1838-1917), who proposed in his ‘Psychology from an Empirical Standpoint’ (1874) that it is the intentionality or directedness of mental states that marks off the mental from the physical.

Many mental states and activities exhibit the feature of intentionality, being directed at objects. Two related things are meant by this. First, when one desires or believes or hopes, one always desires or believes or hopes something. Assume, for example, that belief report (1) is true:

(1) Most Canadians believe that George Bush is a Republican.

Report (1) tells us that certain subjects, Canadians, have a certain attitude, belief, to something designated by the nominal phrase ‘that George Bush is a Republican’ and identified by its content-sentence:

(2) George Bush is a Republican.

Following Russell and contemporary usage, we may call the object referred to by the that-clause in (1) and expressed by (2) a proposition. Notice, too, that sentence (2) might also serve as most Canadians’ belief-text, a sentence they might use to express the belief that (1) reports them to have. An utterance of (2) by itself would assert the truth of the proposition it expresses, but as part of (1) its role is not to assert anything, but to identify what the subjects believe. The same proposition can be the object of other attitudes and of other people’s attitudes: most Canadians may regret that Bush is a Republican, Reagan may remember that he is, and Buchanan may doubt that he is.

Nevertheless, following Brentano (1960), we can focus on two puzzles about the structure of intentional states and activities, an area in which the philosophy of mind meets the philosophy of language, logic and ontology. The term ‘intentionality’ should not be confused with the terms ‘intention’ and ‘intension’. There is, nonetheless, an important connection between intension and intentionality, for semantical systems, like extensional model theory, that are limited to extensions cannot provide plausible accounts of the language of intentionality.

The attitudes are philosophically puzzling because it is not easy to see how their intentionality fits with another conception of them, as local mental phenomena.

Beliefs, desires, hopes, and fears seem to be located in the heads or minds of the people that have them. Our attitudes are accessible to us through ‘introspection’: a Canadian can tell that he believes Bush to be a Republican just by examining the ‘contents’ of his own mind; he does not need to investigate the world around him. We think of attitudes as being caused at certain times by events that impinge on the subject’s body, especially by perceptual events, such as reading a newspaper or seeing a picture of an ice-cream cone. The psychological level of description carries with it a mode of explanation which has no echo in physical theory: we regard ourselves and each other as rational, purposive creatures, fitting our beliefs to the world as we perceive it and seeking to obtain what we desire in the light of them. Reason-giving explanations can be offered not only for actions and beliefs, which will receive most attention here, but also for desires, intentions, hopes, fears, angers, affections, and so forth. Indeed, their positioning within a network of rationalizing links is part of the individuating character of this range of psychological states and of the intentional acts they explain.

Meanwhile, these attitudes can in turn cause changes in other mental phenomena, and eventually in the observable behaviour of the subject. Seeing a picture of an ice-cream cone leads to a desire for one, which leads me to forget the meeting I am supposed to attend and to walk to the ice-cream parlour instead. All of this seems to require that attitudes be states and activities that are localized in the subject.

Nonetheless, the phenomenon of intentionality suggests that the attitudes are essentially relational in nature: they involve relations to the propositions at which they are directed and to the objects they are about. These objects may be quite remote from the minds of the subjects. An attitude seems to be individuated by the agent, the type of attitude (belief, desire, and so on), and the proposition at which it is directed. It seems essential to the attitude reported by (1), for example, that it is directed toward the proposition that Bush is a Republican, and it seems essential to this proposition that it is about Bush. But how can a mental state or activity of a person essentially involve some other individual? The difficulty is brought out by two classical problems, known as the problems of ‘no-reference’ and ‘co-reference’.

The classical solution to such problems is to suppose that intentional states are only indirectly related to concrete particulars, like George Bush, whose existence is contingent and who can be thought about in a variety of ways. The attitudes directly involve abstract objects of some sort, whose existence is necessary and whose nature the mind can directly grasp. These abstract objects provide concepts, or ways of thinking of, concrete particulars. Different concepts correspond to different inferential and practical roles: different perceptions and memories give rise to beliefs involving different concepts, and such beliefs serve as reasons for different actions. If we individuate propositions by concepts rather than by the individuals themselves, the co-reference problem disappears.

The proposal has the bonus of also taking care of the no-reference problem. Some propositions will contain concepts that are not, in fact, concepts of anything. These propositions can still be believed, desired, and the like.

This basic idea has been worked out in different ways by a number of authors. The Austrian philosopher Ernst Mally thought that propositions involved abstract particulars that ‘encode’ properties, like being the loser of the 1992 election, rather than concrete particulars, like Bush, who exemplify them. There are abstract particulars that encode clusters of properties that nothing exemplifies, and two abstract objects can encode different clusters of properties that are exemplified by a single thing. The German philosopher Gottlob Frege distinguished between the ‘sense’ and the ‘reference’ of expressions. The senses of ‘George Bush’ and ‘the person who will come in second in the election’ are different, even though their references are the same. Senses are grasped by the mind, are directly involved in propositions, and incorporate ‘modes of presentation’ of objects.

For most of the twentieth century, the most influential approach was that of the British philosopher Bertrand Russell. Russell (1905, 1929) in effect recognized two kinds of propositions. A ‘singular proposition’ consists of particulars together with properties and relations; an example is the proposition consisting of Bush and the property of being a Republican. ‘General propositions’ involve only universals; the general proposition corresponding to ‘someone is a Republican’ would be a complex consisting of the property of being a Republican and the higher-order property of being instantiated. The terms ‘singular proposition’ and ‘general proposition’ are from Kaplan (1989).
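In a common modern notation - supplied here for illustration and not Russell’s own - the contrast can be put as follows. The singular proposition that Bush is a Republican is an ordered complex containing the man himself,

\[ \langle \text{Bush},\ \text{being a Republican} \rangle, \]

whereas the corresponding general proposition contains only universals and is, in effect, what the quantified sentence

\[ \exists x\, \text{Republican}(x) \]

expresses: the property of being a Republican together with the higher-order property of being instantiated.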

Historically, a great deal has been asked of concepts. As shareable constituents of the objects of the attitudes, they presumably figure in cognitive generalizations and in explanations of animals’ capacities and behaviour. They are also presumed to serve as the meanings of linguistic items, underwriting relations of translation, definition, synonymy, antonymy and semantic implication. Much work in the semantics of natural language takes itself to be addressing conceptual structure.

Concepts have also been thought to be the proper objects of philosophical analysis, the activity practised by Socrates and by twentieth-century ‘analytic’ philosophers when they ask about the nature of justice, knowledge or piety, and expect to discover answers by means of a priori reflection alone.

The expectation that one sort of thing could serve all these tasks went hand in hand with what has come to be known as the ‘Classical View’ of concepts, according to which a concept has an ‘analysis’ consisting of conditions that are individually necessary and jointly sufficient for its satisfaction, and which are known to any competent user of it. The standard example is the especially simple case of [bachelor], which seems to be identical to [eligible unmarried male]. A more interesting, but problematic, example has been [knowledge], whose analysis was traditionally thought to be [justified true belief].
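Schematically - in a rendering supplied here for clarity rather than taken from the text - the Classical View treats an analysis as a biconditional whose right-hand side lists the individually necessary and jointly sufficient conditions:

\[ \forall x\,\bigl(\text{Bachelor}(x) \leftrightarrow \text{Male}(x) \wedge \text{Unmarried}(x) \wedge \text{Eligible}(x)\bigr). \]

Each conjunct on the right is necessary for falling under the concept, and together they are sufficient; the traditional analysis of [knowledge] would replace the right-hand side with the conjunction of being justified, being true, and being believed.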

This Classical View seems to offer an illuminating answer to a certain form of metaphysical question: in virtue of what is something the kind of thing it is - e.g., in virtue of what is a bachelor a bachelor? - and it does so in a way that supports counterfactuals: it tells us what would satisfy the concept in situations other than the actual ones (although all actual bachelors might turn out to be freckled, it is possible that there might be unfreckled ones, since the analysis does not exclude that). The View also seems to answer the epistemological question of how people seem to know a priori (or independently of experience) about the nature of many things, e.g., that bachelors are unmarried: it is constitutive of the competency (or possession) conditions of a concept that its users know its analysis, at least on reflection.

Actions, it has been suggested, are doings that have mentalistic explanations: coughing is sometimes like snoring and sometimes like saying ‘Good morning’ - that is, sometimes a mere doing and sometimes an action. Deliberate coughing can be explained by invoking an intention to cough, a desire to cough or some other ‘pro-attitude’ toward coughing, a reason for coughing, a purpose in coughing, or something similarly mental. This is especially natural if we think of actions as ‘outputs’ of the mental ‘machine’. The functionalist thinks of mental states and events as causally mediating between a subject’s sensory inputs and the subject’s ensuing behaviour. Functionalism itself is the stronger doctrine that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the set of functional relations it bears to the subject’s perceptual stimuli, behavioural responses and other mental states.
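The mediating picture the functionalist has in mind can be made vivid with a toy machine table, in the spirit of machine functionalism. The sketch below is purely illustrative; the state names, stimuli and responses are invented and nothing in it is drawn from the text itself. The point is only that a state such as ‘pain’ is individuated by its place in the table - by what inputs send the system into it and what outputs and further states it leads to - not by any intrinsic feature.

```python
# A toy machine table: a "mental state" is individuated solely by how it
# mediates between sensory inputs, behavioural outputs and other states.
# All state names, inputs and outputs are invented for illustration.

MACHINE_TABLE = {
    # (current state, sensory input) -> (behavioural output, next state)
    ("calm", "tissue damage"):    ("wince",   "pain"),
    ("calm", "smell of violets"): ("smile",   "calm"),
    ("pain", "aspirin"):          ("relax",   "calm"),
    ("pain", "tissue damage"):    ("cry out", "pain"),
}

def step(state, stimulus):
    """Return (output, next_state); unlisted stimuli leave the state unchanged."""
    return MACHINE_TABLE.get((state, stimulus), (None, state))

if __name__ == "__main__":
    state = "calm"
    for stimulus in ["tissue damage", "tissue damage", "aspirin"]:
        output, state = step(state, stimulus)
        print(f"{stimulus} -> {output} (now in state: {state})")
```

On this picture, any system realizing the same table - whatever it happens to be made of - would count as being in the same functional states.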

Twentieth-century functionalism gained credibility in an indirect way, by being perceived as affording the least objectionable solution to the mind-body problem.

Disaffected from Cartesian dualism and from the ‘first-person’ perspective of introspective psychology, the behaviourists had claimed that there is nothing to the mind but the subject’s behaviour and dispositions to behave. To refute the view that a certain repertoire of behavioural dispositions is necessary for a mental life, we would need convincing cases of thinking stones, utterly incurable paralytics or disembodied minds. But these alleged possibilities are, to some, merely alleged.

To refute the view that a certain repertoire of behavioural dispositions is sufficient for a mental life, we would need convincing cases of rich behaviour with no accompanying mental states. The typical example is a puppet controlled, by radio-wave links, by other minds outside the puppet’s hollow body. But one might wonder whether the dramatic devices are producing the anti-behaviourist intuition all by themselves. And how could the dramatic devices make a difference to the facts of the case? If the puppeteers were replaced by a machine, not designed by anyone, yet storing a vast number of input-output conditionals, which was reduced in size and placed in the puppet’s head, do we still have a compelling counterexample to the behaviour-as-sufficient view? At least it is not so clear.

Such an example would work equally well against the weaker view that mental states supervene on behavioural dispositions. But supervenient behaviourism could be refuted by something less ambitious. The ‘X-worlders’ of the American philosopher Hilary Putnam (1926-), who are in intense pain but do not betray this in their verbal or non-verbal behaviour, behaving just like pain-free human beings, would be the right sort of case. However, even if Putnam has produced a counterexample for pain - which the American philosopher of mind Daniel Clement Dennett (1942-), for one, would doubtless deny - an ‘X-worlder’ story designed to refute supervenient behaviourism with respect to the attitudes or linguistic meaning will be less intuitively convincing. Behaviourist resistance is easier here, because having a belief or meaning a certain thing lacks a distinctive phenomenology.

There is a more sophisticated line of attack. As Willard Van Orman Quine (1908-2000), perhaps the most influential American philosopher of the latter half of the twentieth century, has remarked, some have taken his thesis of the indeterminacy of translation as a reductio of his behaviourism. For this to be convincing, Quine's argument for the indeterminacy thesis would have to be persuasive in its own right, and that is a disputed matter.

If behaviourism is finally laid to rest to the satisfaction of most philosophers, it will probably not be by counterexamples, or by a reductio from Quine's indeterminacy thesis. Rather, it will be because the behaviourists' worries about other minds and about the public availability of meaning have been shown to be groundless, or not to require behaviourism for their solution. But we can be sure that this happy day will take some time to arrive.

Quine became noted for his claim that the way one uses language determines what kinds of things one is committed to saying exist. Moreover, the justification for speaking one way rather than another, just as the justification for adopting one conceptual system rather than another, was a thoroughly pragmatic one for Quine (see Pragmatism). He also became known for his criticism of the traditional distinction between synthetic statements (empirical, or factual, propositions) and analytic statements (necessarily true propositions). Quine made major contributions in set theory, a branch of mathematical logic concerned with the relationship between classes. His published works include Mathematical Logic (1940), From a Logical Point of View (1953), Word and Object (1960), Set Theory and Its Logic (1963), and Quiddities: An Intermittently Philosophical Dictionary (1987). His autobiography, The Time of My Life, appeared in 1985.

Functionalism, and cognitive psychology considered as a complete theory of human thought, inherited some of the same difficulties that earlier beset behaviourism and the identity theory. These remaining obstacles fall into two main categories: intentionality problems and qualia problems.

Propositional attitudes such as beliefs and desires are directed upon states of affairs which may or may not actually obtain, e.g., that the Republican party will win, and are about individuals who may or may not exist, e.g., King Arthur. Franz Brentano raised the question of how a purely physical entity or state could have the property of being 'directed upon' or about a non-existent state of affairs or object: that is not the sort of feature that ordinary, purely physical objects can have.

The standard functionalist reply is that propositional attitudes have Brentano's feature because the internal physical states and events that realize them 'represent' actual or possible states of affairs. What they represent is determined, at least in part, by their functional roles. That is, mental events, states or processes with content involve reference to objects, properties or relations; a mental state with content can fail to refer, but there always exists a specific condition under which a state with content would refer to certain things. When the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them.

What is it that distinguishes items that serve as representations from other objects or events? And what distinguishes the various kinds of symbols from each other? On the first question, there has been general agreement that the basic notion of a representation involves one thing's 'standing for', 'being about', 'pertaining to' or 'referring to or denoting' something else. The major debates here have been over the nature of this connection between a representation and that which it represents. As to the second, perhaps the most famous and extensive attempt to organize and differentiate among alternative forms of representation is found in the works of C.S. Peirce (1931-1935). Peirce's theory of signs is complex, involving a number of concepts and distinctions that are no longer paid much heed. The aspect of his theory that remains influential and is widely cited is his division of signs into Icons, Indices and Symbols. Icons are signs that are said to be like or resemble the things they represent, e.g., portrait paintings. Indices are signs that are connected to their objects by some causal dependency, e.g., smoke as a sign of fire. Symbols are those signs that are related to their objects by virtue of use or association: they are arbitrary labels, e.g., the word 'table'. This division among signs, or variants of it, is routinely put forth to explain differences in the way representational systems are thought to establish their links to the world. Further, placing a representation in one of the three divisions has been used to account for the supposed differences between conventional and non-conventional representations, between representations that do and do not require learning to understand, and between representations, like language, that need to be read and those which do not require interpretation. Some theorists, moreover, have maintained that it is only the use of Symbols that indicates the presence of mind and mental states.

Representations, along with mental states, especially beliefs and thoughts, are said to exhibit 'intentionality' in that they refer to or stand for something else. The nature of this special property, however, has seemed puzzling. Not only is intentionality often assumed to be limited to humans and possibly a few other species, but the property itself appears to resist characterization in physicalist terms. The problem is most obvious in the case of 'arbitrary' signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes; but the problem also remains for iconic representation.

There are at least two difficulties. One is that of saying exactly how a physical item's representational content is determined: in virtue of what does a neurophysiological state represent precisely that a particular candidate will win? An answer to that general question is what the American philosopher of mind Jerry Alan Fodor (1935-) has called a 'psychosemantics', and several attempts at one have been made. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of 'holists' such as the American philosopher Donald Davidson (1917-2003), who works within a generally holistic theory of knowledge and meaning. On Davidson's view, a radical interpreter can tell when a subject holds a sentence true and, using the principle of 'charity', ends up making an assignment of truth conditions; Davidson is a defender of radical interpretation and the inscrutability of reference, and this holist approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly 'extensional' approach to language. Instrumentalists about mental ascription, such as Daniel Clement Dennett (1942-), take yet another view; Dennett has also been a major force in illuminating how the philosophy of mind needs to be informed by work in the surrounding sciences.

In giving an account of what someone believes, does essential reference have to be made to how things are in the environment of the believer? And, if so, exactly what relation does the environment have to the belief? These questions involve taking sides in the externalism and internalism debate. To a first approximation, the externalist holds that one's propositional attitudes cannot be characterized without reference to the disposition of objects and properties in the world - the environment - in which one is situated. The internalist thinks that propositional attitudes (especially belief) must be characterizable without such reference. The reason that this is only a first approximation of the contrast is that there can be different sorts of externalism. Thus, one sort of externalist might insist that you could not have, say, a belief that grass is green unless it could be shown that there was some relation between you, the believer, and grass. Had you never come across the plant which makes up lawns and meadows, beliefs about grass would not be available to you. However, this does not mean that you have to be in the presence of grass in order to entertain a belief about it, nor does it even mean that there was necessarily a time when you were in its presence. For example, it might have been the case that, though you have never seen grass, it has been described to you. Or, at the extreme, perhaps grass no longer exists anywhere in the environment, but your ancestors' contact with it left some sort of genetic trace in you, and the trace is sufficient to give rise to a mental state that could be characterized as about grass.

At the more specific level that has been the focus in recent years: what do thoughts have in common in virtue of which they are thoughts? That is, what makes a thought a thought? What makes a pain a pain? Cartesian dualism said the ultimate nature of the mental was to be found in a special mental substance. Behaviourism identified mental states with behavioural dispositions; physicalism in its most influential version identifies mental states with brain states. One could imagine that the individual states that occupy the relevant causal roles turn out not to be bodily states: for example, they might instead be states of a Cartesian unextended substance. But it is overwhelmingly likely that the states that do occupy those causal roles are all tokens of bodily-state types. However, a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. Those in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject a dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So, disallowing dualism with respect to substances and their states simply leads to its reappearance at the level of the properties of those states.

The problem concerning mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irreducibly non-physical. So, even if mental states are all identical with physical states, these states appear to have properties that are not physical. And if mental states do actually have non-physical properties, the identity of mental with physical states would not sustain a thoroughgoing mind-body materialism.

A more sophisticated reply to the difficulty about mental properties is due independently to D.M. Armstrong (1968) and David Lewis (1972), who argue that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental or physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.

The way we talk about sensory experience certainly suggests an act/object view. When something looks thus and so in the phenomenological sense, we naturally describe the nature of our sensory experience by saying that we are acquainted with a thus and so 'given'. But suppose that this is a misleading grammatical appearance, engendered by the linguistic propriety of forming complete, putatively referring expressions like 'the bent shape in my visual field', and that there is no more a bent shape in existence for the representative realist to contend to be a mental sense-datum than there is a bad limp in existence when someone has, as we say, a bad limp. When someone has a bad limp, they limp badly; similarly, according to the adverbial theorist, when, as we naturally put it, I am aware of a bent shape, we would better express the way things are by saying that I sense bent-shape-ly. Where the act/object theorist treats the nature of the sensory experience as given by a feature of the object of experience, the adverbial theorist treats it as given by a mode of sensing. (The decision between the act/object and adverbial theories is a hard one.)

In its best-known form, the adverbial theory of experience proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example,

(1) Rod is experiencing a pink square

is rewritten as

Rod is experiencing (pink square)-ly

This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (1) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbialization of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception), coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience which is being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier, and, in particular, as a modifier of a verb. If this is so, it is perhaps appropriate to regard it as a special kind of adverb at the semantic level.

According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness in the event of experiencing that object. The experiences are supposed to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties or, more commonly, as private mental objects bearing sensory qualities, objects which may not exist or have any form of being. Finally, in the terms of representative realism, the objects of perception, of which we are only indirectly aware, are distinct from the objects of experience, of which we are 'directly aware'.

As noted, representative realism is traditionally allied with the act/object theory, but we can also approach the debate between representative realism and direct realism in terms of information processing. Mackie (1976) argues that Locke (1632-1704) can be read as approaching the debate in this way. Consider watching a game on television: my senses, in particular my eyes and ears, 'tell' me that Carlton is winning. What makes this possible is the existence of a long and complex causal chain of electromagnetic radiation running from the game through the television cameras, various cables, the television screen, and the space between the screen and my eyes. Each stage of this process carries information about preceding stages, in the sense that the way things are at a given stage depends on the way things are at preceding stages; otherwise the information would not be transferred from the game to my brain. There needs to be a systematic covariance between the state of my brain and the state of the game, and this cannot obtain unless it also obtains between intermediate members of the long causal chain. For instance, if the state of my retina did not systematically covary with the state of the television screen before me, my optic nerve would have, so to speak, nothing to go on to tell my brain about the screen, and so in turn my brain would have nothing to go on concerning the game. There is no 'information at a distance'.

Of the many stages in this transmission of information between game and brain, I am perceptually aware of only a few. Much of what happens between game and brain I am quite ignorant about; some of what happens I know about from books; but some of what happens I am perceptually aware of: the images on the screen. I am also perceptually aware of the game - otherwise I could not be said to watch the game on television. Now my perceptual awareness of the game depends on my perceptual awareness of the screen: the former goes by means of the latter. In saying this I am not saying that I go through some sort of internal monologue like 'Such and such images on the screen are moving thus and thus. Therefore, Carlton is attacking the goal'. Indeed, if you suddenly covered the screen with a cloth and asked me (1) to report on the images, and (2) to report on the game, I might well find it easier to report on the game than on the images. But that does not mean that my awareness of the game does not go by way of my awareness of the images on the screen. It shows only that I am more interested in the game than in the screen, and so am storing beliefs about it in preference to beliefs about the screen.

We can now see how to elucidate representative realism independently of the debate between act/object and adverbial theorists about sensory experience. Our initial statement of representative realism talked of the information acquired in perceiving an object being most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself. On the act/object, sense-data approach, what is held to make this true is the fact that what we are immediately aware of is a mental sense-datum. But instead, representative realists can put their view this way: just as awareness of the game goes by means of awareness of the screen, so awareness of the screen goes by way of awareness of experience; and, in general, when subjects perceive objects, their perceptual awareness always goes by means of awareness of experience.

Why believe such a view? Because of the point noted earlier: the picture of the world provided by our senses is very different from the picture provided by modern science. It is so different, in fact, that it is hard to grasp what might be meant by insisting that we are in epistemologically direct contact with the world.

The argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument, which must be distinguished carefully. Some differ over their premisses (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). It is important to distinguish the different versions of direct realism, for only some of them might be taken to be vulnerable to familiar facts about the possibility of perceptual illusion.

A crude statement of direct realism is that we sometimes directly perceive physical objects and their properties: we do not always perceive physical objects by perceiving something else, e.g., a sense-datum. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually perceiving something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-data theorists should want. At least some of the philosophers who objected to direct realism would prefer to express what they were objecting to in terms of a technical and philosophically controversial concept such as acquaintance. Using such a notion we could define direct realism this way: in veridical experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.

We know things by experiencing them, and knowledge of acquaintance (Russell changed the preposition to 'by') is epistemically prior to, and has a relatively higher degree of epistemic justification than, knowledge about things. Indeed, sensation has 'the one great value of trueness or freedom from mistake'.

A thought (using that term broadly, to mean any mental state) constituting knowledge of acquaintance with a thing is more or less causally proximate to sensations caused by that thing, while a thought constituting knowledge about the thing is more or less causally distant, being separated from the thing and experience of it by processes of attention and inference. At the limit, if a thought is maximally of the acquaintance type, it is the first mental state occurring in the causal chain originating in the object to which the thought refers, i.e., it is a sensation. The things we have knowledge of acquaintance with include ordinary objects in the external world, such as the Sun.

Grote contrasted the imagistic thoughts involved in knowledge of acquaintance with things with the judgements involved in knowledge about things, suggesting that the latter but not the former are contentful mental states. Elsewhere, however, he suggested that every thought capable of constituting knowledge of or about a thing involves a form, idea, or what we might call conceptual propositional content, referring the thought to its object. Whether contentful or not, thoughts constituting knowledge of acquaintance with a thing are relatively indistinct, although this indistinctness does not imply incommunicability. By contrast, thoughts constituting knowledge about a thing are relatively distinct, as a result of 'the application of notice or attention' to the 'confusion or chaos' of sensation. Grote did not have an explicit theory of reference, the relation by which a thought is of or about a specific thing. Nor did he explain how thoughts can be more or less indistinct.

Helmholtz (1821-94) held unequivocally that all thoughts capable of constituting knowledge, whether 'knowledge which has to do with notions' or 'mere familiarity with phenomena', are judgements or, we may say, have conceptual propositional contents. Where Grote saw a difference between distinct and indistinct thoughts, Helmholtz found a difference between precise judgements which are expressible in words and equally precise judgements which, in principle, are not expressible in words, and so are not communicable.

James (1842-1910), however, made a genuine advance over Grote and Helmholtz by analysing the reference relations holding between a thought and the specific thing of or about which it is knowledge. In fact, he gave two different analyses. On both analyses, a thought constituting knowledge about a thing refers to and is knowledge about 'a reality, whenever it actually or potentially terminates in' a thought constituting knowledge of acquaintance with that thing. The two analyses differ in their treatments of knowledge of acquaintance. On James's first analysis, reference in both sorts of knowledge is mediated by causal chains. A thought constituting pure knowledge of acquaintance with a thing refers to and is knowledge of 'whatever reality it directly or indirectly operates on and resembles'. The concepts of a thought 'operating on' a thing or 'terminating in' another thought are causal, but where Grote found chains of efficient causation connecting thought and referent, James found teleology and final causes. On James's later analysis, the reference involved in knowledge of acquaintance with a thing is direct: a thought constituting knowledge of acquaintance with a thing has that thing as a constituent, and the thing and the experience of it are identical.

James further agreed with Grote that pure knowledge of acquaintance with things, e.g., sensory experience, is epistemically prior to knowledge about things. While the epistemic justification involved in knowledge about things is fallible - all thoughts about things are fallible, and their justification is augmented by their mutual coherence - James was unclear about the precise epistemic status of knowledge of acquaintance. At times, thoughts constituting pure knowledge of acquaintance are said to possess 'absolute veritableness' and 'the maximal conceivable truth', suggesting that such thoughts are genuinely cognitive and that they provide an infallible epistemic foundation. At other times, such thoughts are said not to bear truth-values, suggesting that 'knowledge' of acquaintance is not genuine knowledge at all, but only a non-cognitive necessary condition of genuine knowledge, that is to say, of knowledge about things.

What is more, Russell (1872-1970) agreed with James that knowledge of things by acquaintance 'is essentially simpler than any knowledge of truths, and logically independent of knowledge of truths', and that the mental states involved when one is acquainted with things do not have propositional contents. Russell's reasons seem to have been similar to James's: conceptually unmediated reference to particulars is necessary for understanding any proposition mentioning a particular and, if scepticism about the external world is to be avoided, some particulars must be directly perceived. Russell vacillated about whether or not the absence of propositional content renders knowledge by acquaintance incommunicable.

Russell agreed with James that different accounts should be given of reference as it occurs in knowledge by acquaintance and in knowledge about things, and that in the former case reference is direct. But Russell objected on a number of grounds to James's causal account of the indirect reference involved in knowledge about things. Russell gave a descriptional rather than a causal analysis of that sort of reference: a thought is about a thing when the content of the thought involves a definite description uniquely satisfied by the thing referred to. Accordingly, he preferred to speak of knowledge of things by description rather than of knowledge about things.

Russell advanced beyond Grote and James by explaining how thoughts can be more or less articulate and explicit. If one is acquainted with a complex thing without being aware of or acquainted with its complexity, the knowledge one has by acquaintance with that thing is vague and inexplicit. Reflection and analysis can lead one to distinguish constituent parts of the object of acquaintance and to obtain progressively more distinct, explicit, and complete knowledge about it.

Because one can interpret the relation of acquaintance or awareness as one that is not epistemic, i.e., not a kind of propositional knowledge, it is important to distinguish views of this sort, read as ontological theses, from a view one might call epistemological direct realism: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. Direct realism as an ontological thesis is a view about what the objects of perception are. It is a type of realism, since it assumes that these objects exist independently of any mind that might perceive them, and so it rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being a 'direct' realism rules out those views defended under the rubric of 'critical realism' or 'representative realism', in which there is some non-physical intermediary - usually called a 'sense-datum' or a 'sense impression' - that must first be perceived or experienced in order to perceive the object that exists independently of this perception. According to critical realists, such an intermediary need not be perceived 'first' in a temporal sense, but it is a necessary ingredient which suggests to the perceiver an external reality, or which offers the occasion on which to infer the existence of such a reality. Direct realism, however, denies the need for any recourse to mental go-betweens in order to explain our perception of the physical world.

This reply on the part of the direct realist does not, of course, serve to refute the global sceptic, who claims that, since our perceptual experience could be just as it is without there being any external objects at all, we have no knowledge of any such objects. But no view of perception alone is sufficient to refute such global scepticism. For such a refutation we must go beyond a theory that claims how best to explain our perception of physical objects, and defend a theory that best explains how we obtain knowledge of the world.

The external world, as philosophers have used the term, is not some distant planet external to Earth. Nor is the external world, strictly speaking, a world. Rather, the external world consists of all those objects and events which exist external to perceivers. So the table across the room is part of the external world, and so is the room, and so are the table's brown colour and roughly rectangular shape. Similarly, if the table falls apart when a heavy object is placed on it, the event of its disintegration is a part of the external world.

One object external to and distinct from any given perceiver is any other perceiver. So, relative to one perceiver, every other perceiver is a part of the external world. However, another way of understanding the external world results if we think of the objects and events external to and distinct from every perceiver. So conceived, the set of all perceivers makes up a vast community, with all of the objects and events external to that community making up the external world. In what follows we will suppose that perceivers are entities which occupy physical space, if only because they are partly composed of items which take up physical space.

What, then, is the problem of the external world? Certainly it is not whether there is an external world; this much is taken for granted. Instead, the problem is an epistemological one which, in rough approximation, can be formulated by asking whether and, if so, how a person gains knowledge of the external world. So understood, the problem seems to admit of an easy solution: there is knowledge of the external world, which persons acquire primarily by perceiving objects and events which make up the external world.

However, many philosophers have found this easy solution problematic. Indeed, the very statement of the problem of the external world will be altered once we consider the main theses against the easy solution.

One way in which the easy solution has been further articulated is in terms of epistemological direct realism. This theory is realist insofar as it claims that objects and events in the external world, along with many of their various features, exist independently of and are generally unaffected by perceivers and acts of perception in which they engage. And this theory is epistemologically direct since it also claims that in perception people often, and typically, acquire immediate non-inferential knowledge of objects and events in the external world. It is on this latter point that it is thought to face serious problems.

The main reason for this is that knowledge of objects in the external world seems to be dependent on some other knowledge, and so would not qualify as immediate and non-inferential. For example, it is claimed that I do not gain immediate non-inferential perceptual knowledge that there is a brown and rectangular table before me, because I would not know such a proposition unless I knew that something then appeared brown and rectangular. Hence, knowledge of the table is dependent upon knowledge of how it appears. Alternatively expressed, if there is knowledge of the table at all, it is indirect knowledge, secured only if the proposition about the table may be inferred from propositions about appearances. If so, epistemological direct realism is false.

This argument suggests a new way of formulating the problem of the external world:

The problem of the external world (first version): Can we have knowledge of propositions about objects and events in the external world based upon propositions which describe how the external world appears, i.e., upon appearances?

Unlike our original formulation of the problem of the external world, this formulation does not admit of an easy solution. Instead, it has seemed to many philosophers that it admits of no solution at all, so that scepticism regarding the external world is the only remaining alternative.

A second way in which the easy solution has been articulated is in terms of perceptual direct realism. This theory is realist in just the way described earlier, but it adds that objects and events in the external world are typically directly perceived, as are many of their features such as their colours, shapes, and textures.

Often perceptual direct realism is developed further by simply adding epistemological direct realism to it. Such an addition is supported by claiming that direct perception of objects in the external world provides us with immediate non-inferential knowledge of such objects. Seen in this way, perceptual direct realism is supposed to support epistemological direct realism, though strictly speaking they are independent doctrines: one might consistently, perhaps even plausibly, hold one without also accepting the other.

Direct perception is perception which is not dependent on some other perception. The main opposition to the claim that we directly perceive external objects comes from indirect or representative realism. That theory holds that whenever an object in the external world is perceived, some other object is also perceived, namely a sensum - a phenomenal entity of some sort. Further, one would not perceive the external object if one were to fail to perceive the sensum. In this sense the sensum is a perceived intermediary, and the perception of the external object is dependent on the perception of the sensum. For such a theory, perception of the sensum is direct, since it is not dependent on some other perception, while perception of the external object is indirect. More generally, for the indirect realist, all directly perceived entities are sensa. On the other hand, those who accept perceptual direct realism claim that perception of objects in the external world is typically direct, since that perception is not dependent on perceived intermediaries such as sensa.

It has often been supposed, however, that the argument from illusion suffices to refute all forms of perceptual direct realism. The argument from illusion is actually a family of different arguments rather than one argument. Perhaps the most familiar argument in this family begins by noting that objects appear differently to different observers, and even to the same observers on different occasions or in different circumstances. For example, a round dish may appear round to a person viewing it from directly above and elliptical to another viewing it from one side. As one changes position the dish will appear to have still different shapes, more and more elliptical in some cases, closer and closer to round in others. In each such case, it is argued, the observer directly sees an entity with that apparent shape. Thus, when the dish appears elliptical, the observer is said to see directly something which is elliptical. Certainly this elliptical entity is not the top surface of the dish, since that is round. This elliptical entity, a sensum, is thought to be wholly distinct from the dish.

When the dish is seen from straight above it appears round, and it might be thought that the observer then directly sees the dish rather than a sensum. But here, too, relativity sets in: the dish will appear different in size as one is placed at different distances from it. So even if in all of these cases the dish appears round, it will also appear to have many different diameters. Hence, in these cases as well, the observer is said to directly see some sensum, and not the dish.

This argument concerning the dish can be generalized in two ways. First, more or less the same argument can be mounted for all other cases of seeing and across the full range of sensible qualities - textures and colours in addition to shapes and sizes. Second, one can utilize related relativity arguments for other sense modalities. With the argument thus completed, one will have reached the conclusion that in all cases of non-hallucinatory perception, the observer directly perceives a sensum, and not an external physical object. Presumably in cases of hallucination a related result holds, so that one reaches the fully general result that in all cases of perceptual experience, what is directly perceived is a sensum or group of sensa, and not an external physical object. Perceptual direct realism, therefore, is deemed false.

Yet, even if perceptual direct realism is refuted, this by itself does not generate a problem of the external world. We need to add that if no person ever directly perceives an external physical object, then no person ever gains immediate non-inferential knowledge of such objects. Armed with this additional premise, we can conclude that if there is knowledge of external objects, it is indirect and based upon immediate knowledge of sensa. We can then formulate the problem of the external world in another way:

The problem of the external world (second version): Can we have knowledge of propositions about objects and events in the external world based upon propositions about directly perceived sensa?

It is worth noting the difference between the first and second formulations of the problem of the external world. On the first formulation, we have knowledge of the external world only if propositions about objects and events in the external world are inferable from propositions about appearances.

Some philosophers have thought that if analytic phenomenalism were true, the situation would be different. Analytic phenomenalism is the doctrine that every proposition about objects and events in the external world is fully analysable into, and thus is equivalent in meaning to, a group of propositions about appearances. The number of propositions about appearances making up the analysis of any single proposition about objects and events in the external world would likely be enormous, perhaps indefinitely large. Nevertheless, analytic phenomenalism might seem to be of help in securing the required inferences from propositions about appearances to propositions about objects and events in the external world. Yet because there are indefinitely many propositions about appearances in the analysis of each proposition about objects and events in the external world, any inference from a finite set of propositions about appearances is apt to be inductive, even granting the truth of analytic phenomenalism. Moreover, most of the propositions about appearances into which we might hope to analyse propositions about the external world would be complex subjunctive conditionals, such as that expressed by 'If I were to seem to see something red, round and spherical, and if I were to seem to try to taste what I seem to see, then most likely I would seem to taste something sweet and slightly tart'. But propositions about appearances of this complex sort will not typically be immediately known, and thus knowledge of propositions about objects and events in the external world will not generally be based upon immediate knowledge of such propositions about appearances.

Consider the propositions about appearances expressed by 'I seem to see something red, round, and spherical' and 'I seem to taste something sweet and slightly tart'. To infer cogently from these propositions to that expressed by 'There is an apple before me' we need additional information, such as that expressed by 'Apples generally cause visual appearances of redness, roundness, and spherical shape and gustatory appearances of sweetness and tartness'. With this additional information the inference is a good one, and it is likely to be true that there is an apple there relative to those premisses. The cogency of the inference, however, depends squarely on the additional premiss; relative only to the stated propositions about appearances, it is not highly probable that there is an apple there.
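The dependence of the inference on the added causal premiss can be made vivid with a toy Bayesian calculation. The following sketch is illustrative only and is not part of the original argument; the prior, the likelihoods, and the function name are all assumptions chosen for the example.

# Toy Bayesian sketch (hypothetical numbers): how strongly do the
# appearance-propositions support 'There is an apple before me'?
# The point is only that the answer depends on the causal premiss
# (which supplies the likelihoods), not on the appearances alone.

def posterior(prior, likelihood_given_h, likelihood_given_not_h):
    # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
    evidence = likelihood_given_h * prior + likelihood_given_not_h * (1 - prior)
    return likelihood_given_h * prior / evidence

prior_apple = 0.05  # assumed prior probability that an apple is before me

# With the causal premiss: apples generally cause such appearances,
# and other situations rarely do.
with_premiss = posterior(prior_apple, 0.9, 0.01)

# Without the causal premiss we have no principled likelihoods; treating
# the appearances as uninformative leaves the posterior at the prior.
without_premiss = posterior(prior_apple, 0.5, 0.5)

print(f"with causal premiss:    {with_premiss:.2f}")    # about 0.83
print(f"without causal premiss: {without_premiss:.2f}")  # 0.05, the prior

On this toy model the appearances strongly confirm the apple hypothesis only because the causal premiss supplies the likelihoods, which mirrors the point just made.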

Moreover, there is good reason to think that analytic phenomenalism is false, for every proposed translation of propositions about objects and events in the external world into propositions about appearances has proved inadequate. Nor is enumerative induction of any help in this regard, for that is an inference from premisses about observed objects in a certain class having some properties 'F' and 'G' to the conclusion that unobserved objects in the same class have properties 'F' and 'G'; but propositions about appearances concern appearances, while propositions about the external world concern external objects and events, so the premisses and the conclusion would not concern objects of the same class. So the most likely inductive inference to consider is a causal one: we infer from certain effects, described by propositions about appearances, to their likely causes, described by propositions about external objects and events. But here, too, the inference is apt to prove problematic. In evaluating the claim that such an inference constitutes a legitimate and independent argument form, one must ask whether it is merely a contingent fact that most phenomena have explanations, and that a given criterion such as simplicity usually picks out the correct explanation; if this is true, it would seem to be an empirical fact about ourselves, discovered by inference to the best explanation.

Defenders of realism about the external world have sometimes appealed to an inference to the best explanation to justify propositions about objects and events in the external world: we might say that the best explanation of the appearances is that they are caused by external objects. However, even if this is true, as no doubt it is, it is unclear how establishing this general hypothesis helps justify specific propositions about objects and events in the external world, such as that these particular appearances are caused by a red apple.

The point here is a general one: cogent inductive inferences from propositions about appearances to propositions about objects and events in the external world are available only with some added premiss expressing the requisite causal relation, or perhaps some other premiss describing some other sort of correlation between appearances and external objects. So there is no reason to think that indirect knowledge of the external world can be secured from propositions about appearances alone. And since deductive and inductive inferences from appearances to objects and events in the external world seem to exhaust the options, no solution to the first formulation of the problem - whether we can have knowledge of propositions about objects and events in the external world based upon propositions which describe how the external world appears - is at hand. So unless there is some solution to this, it would appear that scepticism concerning knowledge of the external world is the most reasonable position to take.

Suppose, then, we grant the additional premiss and conclude that if there is knowledge of external objects, it is indirect and based upon immediate knowledge of sensa. Can we have knowledge of propositions about objects and events in the external world based upon propositions about directly perceived sensa? Broadly speaking, there are two positions to consider: perceptual indirect realism and perceptual phenomenalism. The contrast between them is that perceptual phenomenalism rejects realism outright and holds instead that (1) physical objects are collections of sensa, (2) in all cases of perception at least one sensum is directly perceived, and (3) to perceive a physical object one directly perceives some of the sensa which are constituents of the collection making up that object.

Proponents of each of these positions try, in different ways, to answer this second formulation of the problem. Is either of them able to do so? The answer has seemed to most philosophers to be 'no', for in general indirect realists and phenomenalists rely on strategies we have already considered and rejected.

In thinking about these possibilities, we need to bear in mind that the basic evidence here consists of propositions which describe presently directly perceived sensa. Indirect realists typically claim that the inference from presently directly perceived sensa to external objects is an inductive one, specifically a causal inference from effects to causes. An inference of this sort will be perfectly cogent provided we can use a premiss which specifies that physical objects of a certain type are causally correlated with sensa of the sort currently directly perceived. Such a premiss will itself be justified, if at all, solely on the basis of propositions describing presently directly perceived sensa, since for the indirect realist one never directly perceives the causes of sensa. So, if one knows that, say, apples typically cause such-and-such visual sensa, one knows this only indirectly, on the basis of knowledge of sensa. But no group of propositions about directly perceived sensa by itself supports any inference to causal correlations of this sort. Consequently, indirect realists are in no position to justify the added premiss their causal inference requires, and so they cannot show how knowledge of external objects - indirect knowledge, based upon immediate knowledge of sensa - is to be secured.

Phenomenalists have often supported their position, in part, by noting the difficulties facing indirect realism, but phenomenalism is no better off. Phenomenalism construes physical objects as collections of sensa. So to infer a proposition about a physical object from propositions about directly perceived sensa is to infer a proposition about a collection from propositions about its constituent members - an inductive inference, although not a causal one. Nonetheless, the inference in question will require a premiss that such-and-such directly perceived sensa are constituents of some collection 'C', where 'C' is some physical object such as an apple. The problem comes with trying to justify such a premiss. To do this, one will need some plausible account of what is meant by claiming that physical objects are collections of sensa. To explicate this idea, however, phenomenalists have typically turned to analytic phenomenalism: physical objects are collections of sensa in the sense that propositions about physical objects are analysable into propositions about sensa. And analytic phenomenalism, as we have seen, has been discredited.

If neither formulation of the problem of the external world can be solved, then scepticism about the external world is a doctrine we would be forced to adopt. One might even say that it is here that we locate the real problem of the external world: how can we avoid being forced into accepting scepticism?

One way of avoiding scepticism is to question the arguments which lead to the two formulations of the problem in the first place. The crucial question is whether any part of the argument from illusion really forces us to abandon perceptual direct realism. To help see that the answer is 'no', we may note that a key premiss in the relativity argument links how something appears with direct perception: the fact that the dish appears elliptical is supposed to entail that one directly perceives something which is elliptical. But is such an entailment really present? Certainly we do not think that the proposition expressed by 'The book appears worn and dusty and more than two hundred years old' entails that the observer directly perceives something which is worn and dusty and more than two hundred years old. And there are countless other examples like this one, where we will resist the inference from a property 'F' appearing to someone to the claim that 'F' is instantiated in some entity.

Proponents of the argument from illusion might complain that the inference they favour works only for certain adjectives, specifically for adjectives referring to non-relational sensible qualities such as colour, taste, shape, and the like. Such a move, however, requires an argument which shows why the inference works in these restricted cases and fails in all others. No such argument has ever been provided, and it is difficult to see what it might be.

If the argument from illusion is defused, the major threat to perceptual direct realism is removed: we may again hold that we gain knowledge of objects and events in the external world primarily by perceiving them, and that objects and events in the external world are typically directly perceived, as are many of their characteristic features. Hence, there will no longer be any real motivation for the claim that scepticism concerning knowledge of the external world is the most reasonable position to take. Of course, even if perceptual direct realism is reinstated, this does not by any means solve the earlier problem, that knowledge of objects in the external world seems to be dependent on some other knowledge, and so would not qualify as immediate and non-inferential. That problem might arise even for one who accepts perceptual direct realism. But there is reason to be suspicious of the argument that one would not know that one is seeing something blue if one failed to know that something looked blue. In this sense there is a dependence of the former on the latter; what is not clear is whether the dependence is epistemic or semantic. It is the latter if, in order to understand what it is to see something blue, one must also understand what it is for something to look blue. This may be true even when the belief that one is seeing something blue is not epistemically dependent on or based upon the belief that something looks blue. Merely claiming that there is a dependence relation does not discriminate between epistemic and semantic dependence. Moreover, there is reason to think it is not an epistemic dependence: in general, observers rarely have beliefs about how objects appear, but this fact does not impugn their knowledge that they are seeing, e.g., blue objects.

Along with 'consciousness', experience is the central focus of the philosophy of mind. Experience is easily thought of as a stream of private events, known only to their possessor, and bearing at best a problematic relationship to any other events, such as happenings in an external world or the similar streams of other possessors. The stream makes up the conscious life of the possessor. With this picture there is a complete separation of mind and world, and in spite of great philosophical effort the gap, once opened, proves impossible to bridge; both 'idealism' and 'scepticism' are common outcomes. The aim of much recent philosophy, therefore, is to articulate a less problematic conception of experience, making it objectively accessible, so that the facts about how a subject experiences the world are in principle as knowable as the facts about how the same subject digests food. A beginning on this task may be made by observing that experiences have contents: 'content' has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content; a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. 'Content' is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation's content is just whatever it is that underwrites its semantic evaluation.

A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be a representation (have 'content'), and what it is for something to have some particular content rather than some other. There appear to be only four types of theory that have been proposed: theories that ground representation in (1) similarity, (2) covariance, (3) functional role, and (4) teleology.

Similarity theories hold that 'r' represents 'χ' in virtue of being similar to 'χ'. This has seemed hopeless to most as a theory of mental representation because it appears to require that things in the brain must share properties with the things they represent: to represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalized and does not involve property sharing can be worked out, but it is not obvious how.

Covariance theories hold that r's representing 'χ' is grounded in the fact that r's occurrence covaries with that of 'χ'. This is most compelling when one thinks about detection systems: the firing of a neural structure in the visual system is said to represent vertical orientations if its firing covaries with the occurrence of vertical lines in the visual field. Dretske (1981) and Fodor (1987) have, in different ways, attempted to promote this idea into a general theory of content.
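The covariance idea can be pictured with a small, purely illustrative sketch, not drawn from Dretske or Fodor: a toy detector whose firing reliably co-occurs with a feature of its input. On a covariance theory, that reliable co-occurrence is what would ground the claim that the detector's firing represents the feature; the detector, the 3x3 'images' and the feature are all hypothetical.

import random

def has_vertical_line(image):
    # The worldly feature: some column of the 3x3 image is all 1s.
    return any(all(row[c] == 1 for row in image) for c in range(3))

def detector_fires(image):
    # The toy 'neural' detector: fires when some column sums to 3.
    return any(sum(row[c] for row in image) == 3 for c in range(3))

random.seed(0)
images = [[[random.randint(0, 1) for _ in range(3)] for _ in range(3)]
          for _ in range(1000)]

# The detector's firing covaries (here, perfectly) with the presence of a
# vertical line; on a covariance theory this covariation is what makes the
# firing a candidate representation of 'vertical line present'.
agreement = sum(detector_fires(img) == has_vertical_line(img) for img in images)
print(f"detector agrees with the feature on {agreement} of 1000 random images")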

Teleological theories hold that 'r' represents 'χ' if it is r's function to indicate (i.e., covary with) 'χ'. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical and ahistorical theories of functions. Historical theories individuate functional states, hence content, in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was 'learned', or the way it evolved. A historical theory might hold that the function of 'r' is to indicate 'χ' only if the capacity to token 'r' was developed (selected, learned) because it indicates 'χ'. Thus, a state physically indistinguishable from 'r' (physical states being ahistorical) but lacking r's historical origins would not represent 'χ' according to historical theories.

Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. The primary use of the terms ‘internalism’ and ‘externalism’ here has to do with the issue of how the content of beliefs and thoughts is determined: according to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind or brain, and not at all on his physical and social environment, while according to an externalist view, content is significantly affected by such external factors.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc., that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem, at least, to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that we seem to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support externalism about justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else, but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Atomistic theories take a representation’s content to be something that can be specified independently of that representation’s relations to other representations. What Fodor (1987) calls the crude causal theory, for example, takes a representation to be a |cow| - a mental representation with the same content as the word ‘cow’ - if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how |cow|s must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a |cow| if it behaves as a |cow| behaves in inference.

Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls ‘short-armed’ functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke a historical theory of functions, take content to be determined by ‘external’ factors. Externalist theories (sometimes called non-individualistic theories, following Burge, 1979) have the consequence that molecule-for-molecule identical cognitive systems might yet harbor representations with different contents. This has given rise to a controversy concerning ‘narrow’ content. If we assume some form of externalist theory is correct, then content is, in the first instance, ‘wide’ content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Philosophers attached to externalist theories of content have therefore sometimes attempted to introduce ‘narrow’ content, i.e., an aspect or kind of content that is equivalent in internally equivalent systems. The simplest such theory is Fodor’s idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
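
The function-from-contexts idea can be pictured with a deliberately crude sketch; the context labels and content descriptions below are invented for illustration (they borrow from the Twin Earth case discussed further below) and are not Fodor’s own formulation.

    # Crude picture of narrow content as a function from contexts (external
    # factors) to wide contents. Context labels and contents are invented
    # for illustration only.

    def narrow_content_of_water_thought(context: str) -> str:
        """One and the same internal state; its wide content varies with context."""
        wide_contents = {
            "Earth": "a thought about H2O",
            "Twin Earth": "a thought about XYZ",
        }
        return wide_contents.get(context, "wide content undetermined")

    print(narrow_content_of_water_thought("Earth"))       # a thought about H2O
    print(narrow_content_of_water_thought("Twin Earth"))  # a thought about XYZ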



The actions made rational by content-involving states are actions individuated in part by reference to the agent’s relations to things and properties in his environment: wanting to see a particular movie and believing that the building over there is a cinema showing it makes rational the action of walking in the direction of that building. Similarly, for the fundamental case of a subject who has knowledge about his environment, a crucial factor in making rational the formation of particular attitudes is the way the world is around him. One may expect, then, that any theory that links the attribution of contents to states with rational intelligibility will be committed to the thesis that the content of a person’s states depends in part upon his relations to the world outside him. We can call this the thesis of externalism about content.

Externalism about content should steer a middle course. On the one hand, the relations of rational intelligibility involve not just things and properties in the world, but the way they are presented as being - an externalist should use some version of Frege’s notion of a mode of presentation. Many have argued that an expression has, besides its reference, a ‘sense’ or ‘mode of presentation’ (sometimes ‘intension’ is used as well). After all, ‘is an equiangular triangle’ and ‘is an equilateral triangle’ pick out the same things not only in the actual world but in all possible worlds, and so have the same extension, the same intension and (arguably, from a causal point of view) pick out the same property, but they differ in the way these referents are presented to the mind. On the other hand, the externalist for whom considerations of rational intelligibility are pertinent to the individuation of content is likely to insist that we cannot dispense with the notion of something in the world - an object, property or relation - being presented in a certain way. If we dispense with the notion of something external being presented in a certain way, we are in danger of regarding attributions of content as having no consequences for how an individual relates to his environment, in a way that is quite contrary to our intuitive understanding of rational intelligibility.

Externalism comes in more and less extreme versions. Consider a thinker who sees a particular pear and thinks the thought ‘that pear is ripe’, where the demonstrative way of thinking of the pear expressed by ‘that pear’ is made available to him by his perceiving the pear. Some philosophers, including Evans (1982) and McDowell (1984), have held that the thinker would be employing a different perceptually based way of thinking were he perceiving a different pear. But externalism need not be committed to this: in the perceptual state that makes available the way of thinking, the pear is presented as being in a particular direction from the thinker, at a particular distance, and as having certain properties. A position will still be externalist if it holds that what is involved in the pear’s being so presented is the collective role of these components of content in making intelligible, in various circumstances, the subject’s relations to environmental directions, distances and properties of objects. This can be held without commitment to the object-dependence of the way of thinking expressed by ‘that pear’. This less strenuous form of externalism must, though, address the epistemological arguments offered in favour of the more extreme versions, to the effect that only they are sufficiently world-involving.

Externalism about content is a claim about dependence, and dependence comes in various kinds. The apparent dependence of the content of beliefs on factors external to the subject can be formulated as a failure of supervenience of belief content upon facts about what is the case within the boundaries of the subject’s body. In epistemology, normative properties such as justification and reasonableness are often held to supervene on natural properties in a similar way. The interest of supervenience is that it promises a way of tying normative properties closely to natural ones without exactly reducing them to natural ones: it can be the basis of a sort of weak naturalism. This was the motivation behind Davidson’s (1917-2003) attempt to say that mental properties supervene on physical ones - an attempt which ran into severe difficulties. To claim that such supervenience fails is to make a modal claim: that there can be two persons the same in respect of their internal physical states (and so in respect of those of their dispositions that are independent of content-involving states), who nevertheless differ in respect of which beliefs they have. Putnam’s (1926- ) celebrated example of a community on Twin Earth, where the water-like substance in lakes and rain is not H2O but some different chemical compound XYZ, illustrates such a failure of supervenience. A molecule-for-molecule replica of you on Twin Earth has beliefs to the effect that ‘water’ is thus-and-so; those with no chemical beliefs on Twin Earth may well not have any beliefs to the effect that water is thus-and-so, even if they are replicas of persons on Earth who do have such beliefs. Burge emphasized that this phenomenon extends far beyond beliefs about natural kinds.
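
Put schematically (this formalization is mine, not the author’s own notation), the failure of supervenience claimed here amounts to a modal claim of roughly the following form:

\[
\Diamond\,\exists x\,\exists y\,\big(\mathrm{Internal}(x) = \mathrm{Internal}(y)\ \wedge\ \mathrm{BeliefContents}(x) \neq \mathrm{BeliefContents}(y)\big),
\]

where Internal(x) stands for the facts about what is the case within the boundaries of x’s body, and BeliefContents(x) for the contents of x’s beliefs.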

In the case of content-involving perceptual states, it is a much more delicate matter to argue for the failure of supervenience. The fundamental reason for this is that the attribution of perceptual content is answerable not only to factors on the input side - what in certain fundamental cases causes the subject to be in the perceptual state - but also to factors on the output side - what the perceptual state is capable of helping to explain amongst the subject’s actions. If differences in perceptual content always involve differences in bodily described actions in suitable counterfactual circumstances, and if these different actions always have distinct neural bases, perhaps there will after all be supervenience of content-involving perceptual states on internal states.

This connects with another strand of thought: any thinker who has an idea of an objective spatial world - an idea of a world of objects and phenomena which can be perceived but which are not dependent upon being perceived for their existence - must be able to think of his perception of the world as being simultaneously due to his position in the world and to the condition of the world at that position. The very idea of a perceivable, objective spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. It is likewise highly relevant to a subject’s psychological self-awareness that he can think of himself as a perceiver of the environment.



However, one idea that has in recent times been thought by many philosophers and psychologists alike to offer promise in this connection is the idea that perception can be thought of as a species of information-processing, in which the stimulation of the sense-organs constitutes an input to subsequent processing, presumably of a computational form. The psychologist J.J. Gibson suggested that the senses should be construed as systems whose function is to derive information from the stimulus-array, or to ‘hunt for’ such information. He thought that it was enough for a satisfactory psychological theory of perception that its account be restricted to the details of such information pick-up, without reference to other ‘inner’ processes such as concept-use. Although Gibson has been very influential in turning psychology away from the previously dominant sensation-based framework of ideas (of which Gestalt psychology was really a special case), his claim that reliance on such a notion of information is enough has seemed incredible to many. Moreover, his notion of information is close enough to the ordinary one to warrant the accusation that it presupposes the very ideas of, for example, concept-possession and belief that the account claims to exclude. The idea of information espoused by Gibson (though it has to be said that this claim has been disputed) is that of ‘information about’, not the technical one involved in information theory or that presupposed by the theory of computation.

There are nevertheless important links between these diverse uses of ‘information’. It is also worth recalling Hume’s observation: when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure; I never catch myself at any time without a perception, and never can observe anything but the perception. However that may be, the idea here is that specifying the content of a perceptual experience involves saying what ways of filling out a space around the origin with surfaces, solids, textures, light and so forth are consistent with the correctness or veridicality of the experience. Such contents are not built from propositions, concepts, senses or continuants of material objects.

Where the term ‘content’ was once associated with the phrase ‘content of consciousness’, picking out the subjective aspects of mental states, its use in the phrase ‘perceptual content’ is intended to pick out something more closely akin to what was once called ‘form’: the objective and publicly expressible aspects of mental states. The content of perceptual experience is how the world is represented to be. Perceptual experiences are then counted as illusory or veridical depending on whether the content is correct and the world is as represented. In so far as such a theory of perception can be taken to be answering the more traditional problems of perception, the question arises: what relation is there between the content of a perceptual state and conscious experience? One proponent of an intentional approach to perception notoriously claims that perception is ‘nothing but’ the acquiring of true or false beliefs concerning the current state of the organism’s body or environment, but the complaint remains that we cannot give an adequate account of conscious perception, given the ‘nothing but’ element of this account. However, an intentional theory of perception need not be allied with any general theory of ‘consciousness’, one which explains what the difference is between conscious and unconscious states. If it is to provide an alternative to a sense-data theory, the theory need only claim that where experience is conscious, its content is constitutive, at least in part, of the phenomenological character of that experience. This claim is consistent with a wide variety of theories of consciousness, even the view that no account can be given.

An intentional theory is also consistent with either affirming or denying the presence of subjective features in experience. Among traditional sense-data theorists of experience, H.H. Price attributed an intentional content to perceptual consciousness in addition to its subjective features. One may thus attribute subjective properties to experience - variously labelled sensational properties or qualia - as well as intentional content. One might call a theory of perception which insists that all features of what an experience is like are determined by its intentional content a purely intentional theory of perception.

Mental events, states or processes with content include seeing that the door is shut, believing you are being followed, and calculating the square root of 2. What centrally distinguishes states, events or processes - henceforth, simply states - with content is that they involve reference to objects, properties or relations. For a state with content there exists a specific condition for it to refer to certain things, and when the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them.

This highly generic characterization of content permits many subdivisions. It does not in itself restrict contents to conceptualized contents, and it permits contents built from Fregean senses as well as Russellian contents built from objects and properties. It leaves open the possibility that unconscious states, as well as conscious states, have contents. It equally allows the states identified by an empirical computational psychology to have content. A correct philosophical understanding of this general notion of content is fundamental not only to the philosophy of mind and psychology, but also to the theory of knowledge and to metaphysics.

Perceptions make it rational for a person to form corresponding beliefs and make it rational to draw certain inferences. Beliefs and desires make rational the formation of particular intentions and the performance of the appropriate actions. People are frequently irrational, of course, but a governing ideal of this approach is that for any family of contents there is some minimal core of rational transitions to or from states involving them, a core that a person must respect if his states are to be attributed with those contents at all. To be rational, a set of beliefs, desires and actions, as well as perceptions and decisions, must fit together in various ways. If they do not, in the extreme case they fail to constitute a mind at all - no rationality, no agent. This core notion of rationality in the philosophy of mind thus concerns holistic coherence requirements upon the system of elements comprising a person’s mind, and it is one reason why functionalism about content and meaning appears to lead to holism. In general, transitions between mental states, and between mental states and behaviour, depend on the contents of the mental states themselves. Consider my inference from there being sharks in the water to the conclusion that people shouldn’t be swimming. Suppose I first think that sharks are dangerous, but then change my mind, coming to think that sharks are not dangerous. If content is individuated by such transition relations, the content that the first belief affirms can’t be the same as the content that the second belief denies, because the transition relations - e.g., the inference from sharks being in the water to what people should do - differ; so it seems I have not really changed my mind about one and the same content. The functionalist reply is to say that some transitions are relevant to content individuation, whereas others are not. Appeal to a traditional analytic/synthetic distinction clearly won’t do, for on such a view ‘dog’ and ‘cat’ could end up with the same content. It could not be analytic that dogs bark or that cats meow, since we can imagine a non-barking breed of dog and a non-meowing breed of cat. ‘Dogs are animals’ is analytic, but so is ‘Cats are animals’. Dogs are not cats - but then, equally, cats are not dogs. So a functionalist account restricted to traditional analytic inferences will not distinguish the meaning of ‘dog’ from the meaning of ‘cat’. Other functionalists accept holism for ‘narrow content’, attempting to accommodate intuitions about the stability of content by appealing to wide content.

Within the literature on inference it is usual to find it said that an inference is a (perhaps very complex) act of thought by virtue of which act (1) I pass from a set of one or more propositions or statements to a proposition or statement, and (2) it appears that the latter is true if the former is or are. This psychological characterization has occurred widely in the literature under more or less inessential variations.

It is natural to desire a better characterization of inference, but attempts to do so by constructing a fuller psychological explanation fail to capture the grounds on which an inference is objectively valid - a point famously made by Gottlob Frege. And attempts to understand the nature of inference better through the device of representing inferences by formal-logical derivations (1) leave us puzzled about the relation of such derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations themselves inferences? And aren’t informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein. Coming up with a good and adequate characterization of inference - and even working out what would count as a good and adequate characterization - is thus a hard and by no means solved philosophical problem.
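
For concreteness, the kind of formal-logical derivation at issue can be as simple as the standard rule of modus ponens, written here in a schematic natural-deduction style purely as an illustration:

\[
\frac{p \qquad p \rightarrow q}{q}\ (\text{modus ponens})
\]

The worry voiced above is that even applying this rule to a particular case seems to require an informal inference, namely the inference that this very step is an instance of the rule.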

Still, ascribing states with content to an actual person has to proceed simultaneously with the attribution of a wide range of non-rational states and capacities. In general, we cannot understand a person’s reasons for acting as he does without knowing the array of emotions and sensations to which he is subject, what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Even the content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. A perception of the world as being a certain way is not (and could not be) under a subject’s rational control. Though it is true and important that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons - observational beliefs about the environment - have contents which can only be elucidated by referring back to perceptual experience. In this respect (as in others), perceptual states differ from those beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing: for frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.

What is the significance for theories of content of the fact that it is almost certainly adaptive for members of a species to have a system of states with representational content which are capable of influencing their actions appropriately? According to teleological theories of content, a constitutive account of content - one which says what it is for a state to have a given content - must make use of the notions of natural function and teleology. The intuitive idea is that for a belief state to have a given content ‘p’ is for the belief-forming mechanism which produced it to have the function (perhaps derivatively) of producing that state only when it is the case that ‘p’. But even if content itself proves to resist elucidation in terms of natural function and selection, it is still a very attractive view that selection must be mentioned in an account of what associates something - such as a sentence - with a particular content, even though that content itself may be individuated by other means.

Contents are normally specified by ‘that . . .’ clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of perceptual content is to regard the content as determined by a spatial type, the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. The type involves a specification of surfaces and features in the environment, and their distances and directions from the perceiver’s body as origin. Supporters of the view that all content is conceptual will say that the legitimacy of using these spatial types in giving the content of experience does not undermine that thesis: the spatial type is just a way of capturing what can equally be captured by conceptual components such as ‘that distance’ or ‘that direction’, where these demonstratives are made available by the perception in question.

Representative realism holds that (1) there is a world whose existence and nature is independent of us and of our perceptual experience of it, (2) perceiving an object located in that external world necessarily involves causally interacting with that object, and (3) the information acquired in perceiving an object is indirect: it is information most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself. Traditionally, representative realism has been allied with an act/object analysis of sensory experience. In terms of representative realism, objects of perception (of which we are only ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’); Meinongians, however, may simply treat objects of perception as existing objects of experience.

Armstrong (1926- ) not only sought to explain perception without recourse to sense-data or subjective qualities but also sought to equate the intentionality of perception with that of belief. There are two aspects to this: the first is to suggest that the only attitude towards a content involved in perception is that of believing, and the second is to claim that the only content involved in perceiving is content that a belief may have. The former suggestion faces an immediate problem, recognized by Armstrong, in the possibility of having a perceptual experience without acquiring the corresponding belief. One such case is where the subject already possesses the requisite belief, so that perception reinforces, rather than leads to the acquisition of, belief. The more problematic case is that of disbelief in perception, where a subject has a perceptual experience but refrains from acquiring the corresponding belief. For example, someone familiar with the Müller-Lyer illusion, in which lines of equal length appear unequal, is unlikely to acquire the belief that the lines are unequal on encountering a recognizable example of the illusion. Despite that, the lines may still appear unequal to them.

Armstrong seeks to encompass such cases by talk of dispositions to acquire beliefs and talk of potentially acquiring beliefs. On his account this is all we need say to characterize the psychological state enjoyed. However, once we admit that the disbelieving perceiver still enjoys a conscious occurrent experience, characterizing it in terms of a disposition to acquire a belief seems inadequate. There are two further worries. One may object that the content of perceptual experiences may play a role in explaining why a subject disbelieves in the first place: someone may fail to acquire a perceptual belief precisely because how things appear to her is inconsistent with her prior beliefs about the world. Secondly, some philosophers have claimed that there can be perception without any corresponding belief. Cases of disbelief in perception are still examples of perceptual experience that impinge on belief: where a sophisticated perceiver does not acquire the belief that the Müller-Lyer lines are unequal, she will still acquire a belief about how things look to her. Dretske (1969) argues for a notion of non-epistemic seeing on which it is possible for a subject to be perceiving something while lacking any belief about it, because she has failed to notice what is apparent to her. If we assume that such non-epistemic seeing nevertheless involves conscious experience, it would seem to provide another reason to reject Armstrong’s view and admit that if perceptual experiences are intentional states then they are a distinct attitude-type from that of belief. However, even if one rejects Armstrong’s equation of perceiving with acquiring beliefs or dispositions to believe, one may still accept that he is right about the functional links between experience and belief, and the authority that experience has over belief, an authority which can nevertheless be overcome.

It is probably true that philosophers have shown much less interest in the subject of the imagination during the last fifteen years or so than in the period just before that. It is certainly true that more books about the imagination have been written by those concerned with literature and the arts than by philosophers in general, and by those concerned with the philosophy of mind in particular. This is understandable in that the imagination and imaginativeness figure prominently in artistic processes, especially in romantic art. Still, those two high priests of romanticism, Wordsworth and Coleridge, made large claims for the role played by the imagination in our views of reality, although Coleridge’s thinking on this was influenced by his reading of the German philosophers of the late eighteenth and early nineteenth centuries, particularly Kant and Schelling. Coleridge distinguished between primary and secondary imagination, both of them in some sense productive, as opposed to merely reproductive. Primary imagination is involved in all perception of the world, in accordance with a theory which Coleridge derived from Kant, while secondary imagination, the poetic imagination, is creative from the materials that perception provides. It is this poetic imagination which exemplifies imaginativeness in the most obvious way.

Being imaginative is a function of thought, but to use one’s imagination in this way is not just a matter of thinking in novel ways. Someone who, like Einstein for example, presents a new way of thinking about the world need not by reason of this be supremely imaginative (though of course he may be). The use of new concepts or a new way of using already existing concepts is not in itself an exemplification of the imagination. What seems crucial to the imagination is that it involves a series of perspectives, new ways of seeing things, in a sense of ‘seeing’ that need not be literal. It thus involves, whether directly or indirectly, some connection with perception, but in different ways. It is therefore necessary to make clear the similarities and differences between seeing proper and seeing with the mind’s eye, as it is sometimes put. This will involve some consideration of the nature and role of images, though there is no general agreement among philosophers about how questions concerning the nature and status of imagery are to be settled.

Connections between the imagination and perception are evident in the ways that many classical philosophers have dealt with the imagination. One of the earliest examples of this, the treatment of ‘phantasia’ (usually translated as ‘imagination’) in Aristotle’s ‘De Anima’ III.3, seems to regard the imagination as a sort of half-way house between perception and thought, but in a way which makes it cover appearances in general, so that the chapter in question has as much to do with perceptual appearances, including illusions, as it has to do with, say, imagery. Yet Aristotle also emphasizes that imagining is in some sense voluntary, and that when we imagine a terrifying scene we are not necessarily terrified, any more than we need be when we see terrible things in a picture. How that fits in with the idea that an illusion is or can be a function of the imagination is less than clear. Yet some subsequent philosophers, Kant in particular, followed in recent times by P.F. Strawson, have maintained that all perception involves the imagination, in some sense of that term, in that some bridge is required between abstract thoughts and their perceptual instances. This comes out in Kant’s treatment of what he calls the ‘schematism’, where he rightly argues that someone might have an abstract understanding of the concept of a dog without being able to recognize or identify any dogs. It is also clear that someone might be able to classify all dogs together without any understanding of what a dog is. The bridge that needs to be provided to link these two abilities Kant attributes to the imagination.

In so arguing Kant goes, as he so often does, beyond Hume, who thought of the imagination in two connected ways. First, there is the fact that there exist, Hume thinks, ideas which are either copies of impressions provided by the senses or are derived from these. Ideas of imagination are distinguished from those of memory, and both of these from impressions of sense, by their lesser vivacity. Second, the imagination is involved in the processes, mainly the association of ideas, which take one from one idea to another, and which Hume uses to explain, for example, our tendency to think of objects as continuing to exist even when we have no impression of them. The imagination, working on ideas or images, is the mental process which takes one from one idea to another and thereby explains our tendency to believe things that go beyond what the senses immediately justify. The role which Kant gives to the imagination in relation to perception in general is obviously a wider and more fundamental role than that which Hume allows. Indeed, one might take Kant to be saying that were there not the role that he, Kant, insists on, there would be no place for the role which Hume gives it. Kant also allows for a free use of the imagination in connection with the arts and the perception of beauty, and this is a more specific role than that involved in perception overall.

In ordinary vision we normally see things as such-and-such; the question is how this is to be construed and how it relates to a number of other aspects of the mind’s functioning - sensation, concepts and other things involved in our understanding of things, belief and judgement, the imagination, our actions in relation to the world around us, and the causal processes involved in the physics, biology and psychology of perception. Some of the last were central to the considerations that Aristotle raised about perception in his ‘De Anima’.

Nevertheless, there are also special, imaginative ways of seeing things, which Wittgenstein (1889-1951) emphasized in his treatment of ‘seeing-as’ in his ‘Philosophical Investigations’ II.xi. Seeing a triangle drawn on a piece of paper as standing up, lying down, hanging from its apex and so on is a form of ‘seeing-as’ which is both more special and more sophisticated than simply seeing it as a triangle. Both involve the application of concepts to the objects of perception, but the way in which this is done in the two cases is quite different. One might say that in the second case one has to adopt a certain perspective, a certain point of view, and if that is right it links up with what was said earlier about the relation and difference between thinking imaginatively and thinking in novel ways.

Wittgenstein (1953) used the phrase ‘an echo of a thought in sight’ in relation to these special ways of seeing things, which he called ‘seeing aspects’. Roger Scruton has spoken of the part played in it all by ‘unasserted thought’, but the phrase used by Wittgenstein brings out more clearly one connection between thought and a form of sense-perception. Wittgenstein (1953) also compares the concept of an aspect and that of seeing-as with the concept of an image, and this brings out a point about the imagination that has not been much evident in what has been said so far - that imagining something is typically a matter of picturing it in the mind, and that this involves images in some way. However, the picture view of images has come under heavy philosophical attack. First, there have been challenges to the sense of the view: mental images are not seen with real eyes, they cannot be hung on real walls, and they have no objective weight or colour. What, then, can it mean to say that images are pictorial? Secondly, there have been arguments that purport to show that the view is false. Perhaps the best known of these is founded on the charge that the picture theory cannot satisfactorily explain the indeterminacy of many mental images. Finally, there have been attacks on the evidential underpinning of the theory. Historically, the philosophical claim that images are picture-like rested primarily on an appeal to introspection, and it is now widely held that introspection reveals less about the mind than was traditionally supposed. This attitude towards introspection has manifested itself, in the case of imagery, in the view that what introspection really shows about visual images is not that they are pictorial but only that what goes on in imagery is experientially much like what goes on in seeing. This aspect is crucial for the philosophy of mind, since it raises the question of the status of images, and in particular whether they constitute private objects or states of some kind. Sartre (1905-80), in his early work on the imagination, emphasized, following Husserl (1859-1938), that images are forms of consciousness of an object, but in such a way that they ‘present’ the object as not being: the image, he said, ‘posits its object as a nothingness’. Such a characterization brings out something about the role of the form of consciousness of which the having of imagery may be a part: in picturing something the images are not themselves the objects of consciousness. The account does less, however, to bring out clearly just what images are or how they function.

In grappling with questions about picturing and seeing with the mind’s eye, Ryle (1900-76) argued that in picturing, say, Lake Ontario - in having it before the mind’s eye - we are not confronted with a mental picture of Lake Ontario: images are not seen. We can nevertheless ‘see’ Lake Ontario, and the question is what this ‘seeing’ is, if it is not seeing in any direct sense. One of the things that may make this question difficult to answer is the fact that people’s images and their capacity for imagery vary, and this variation is not directly related to their capacity for imaginativeness. While an image may function in some way as a ‘presentation’ in a train of imaginative thought, such thought does not always depend on that: images may occur in thought which are not really representational at all, which are not, strictly speaking, ‘of’ anything. If the images are representational, can one discover things from one’s images that one would not otherwise know? Many people would answer ‘no’, especially if their images are generally fragmentary, but it is not clear that this is true for everyone. What is more, and this affects the second point, fragmentary imagery which is at best ancillary to the process of thought in which it occurs may not be in any obvious sense representational, even if the thought itself is ‘of’ something.

Another problem with the question of what it is to ‘see’ Lake Ontario with the mind’s eye is that the ‘seeing’ in question may or may not be a direct function of memory. For one who has seen Lake Ontario, imaging it may be simply a matter of reproducing in some form the original vision, and the vision may be reproduced unintentionally and without any recollection of what it is a ‘vision’ of. For one who has never seen it, the task of imagining it depends most obviously on knowledge of what sort of thing Lake Ontario is, and perhaps on experiences which are relevant to that knowledge. It would be surprising, to say the least, if imaginative power could produce a ‘seeing’ that was not constructed from any previous seeing. But that the ‘seeing’ is not itself a seeing in the straightforward sense is clear, and on this negative point what Ryle says, and others have said, seems clearly right. As to what ‘seeing’ is in a positive way, Ryle answers that it involves fancying something and that this can be assimilated to pretending. Fancying that one is seeing Lake Ontario is thus at least like pretending that one is doing that thing. But is it?

Along the same lines, there is in fact a great difference between, say, imagining that one is a tree and pretending to be a tree. Pretending normally involves doing something, and even when there is no explicit action on the part of the pretender, as when he or she pretends that something or other is the case, there is at all events an implication of possible action. Pretending to be a tree may involve little more than standing stock-still with one’s arms spread out like branches. To imagine being a tree (something that some people deny is possible, which is to my mind a failure of imagination) need imply no action whatever. (Imagining being a tree is different in this respect from imagining that one is a tree, where this means believing, falsely, that one is a tree; one can imagine being a tree without this committing one to any beliefs on that score.) Yet imagining being a tree does seem to involve adopting the hypothetical perspective of a tree, contemplating, perhaps, what it is like to be a fixture in the ground with roots growing downward and with branches (somewhat like arms) blown by the wind and with birds perching on them.

Imagining something seems in general to involve a change of identity on the part of something or other, and in imagining being something else, such as a tree, the partial change of identity contemplated is in oneself. The fact that the change of identity contemplated cannot be complete does not gainsay the point that it is a change of identity which is being contemplated. One might raise the question whether something about the ‘self’ is involved in all imaginings. Berkeley (1685-1753) even suggests that imagining a solitary unperceived tree involves a contradiction, in that to imagine it is to imagine oneself perceiving it. In fact, there is a difference between imagining an object, solitary or not, and imagining oneself seeing that object. The latter certainly involves putting oneself imaginatively in the situation pictured; the former involves contemplating the object from a point of view - the point of view which one would oneself have if one were viewing it. It is this reference to a point of view, to which reference has already been made, that clearly distinguishes picturing something from merely thinking of it.

This does not rule out the possibility that an image might come into one’s mind which one recognizes as some kind of depiction of a scene. But when actually picturing a scene, it would not be right to say that one imagines the scene by way of a contemplation of an image which plays the part of a picture of it. Moreover, it is possible to imagine a scene without any images occurring of which the natural interpretation would be that they are pictures of that scene. It is not impossible for someone imagining, say, the GTA to report on request the occurrence of images which are not in any sense pictures of the GTA - not of that particular city and perhaps not even of a city at all. That would not entail that he or she was not imagining the GTA: the images may merely be related to or associated with the GTA, even if thought by others to be of the GTA.

This raises a question asked by Wittgenstein (1953): ‘What makes my image of him into an image of him?’ To which Wittgenstein replies, ‘Not its looking like him’, and he further suggests that a person’s account of what his imagery represents is decisive. Certainly it is so when the process of imagination which involves the imagery is one that the person engages in intentionally. The same is not true, as Wittgenstein implicitly acknowledges in the same context, if the imagery simply comes to mind without there being any intention; in that case, one might not even know what the image is an image of.

Nevertheless, all this complicates the question of what the status of mental images is. It might seem that they stand in relation to imagining as ‘sensations’ stand to perception, except that the occurrence of sensations is passive, while the occurrence of an image can be intentional, and in the context of an active flight of imagination is likely to be so. Sensations give perceptions a certain phenomenal character, providing their sensuous, as opposed to conceptual, content. Intentional action, for its part, has interesting symmetries and asymmetries with perception. Like perceptual experience, the experiential component of intentional action is causally self-referential: if, for example, I am now walking to my car, then the condition of satisfaction of the present experience is that there be certain bodily movements, and that this very experience of acting cause those bodily movements; and, like perceptual experience, the experience of acting is typically a conscious mental event. There is also a Humean point worth recalling here: when we attribute a necessity to the relation between things of two particular kinds, this arises from a feeling or ‘impression’, for an observed correlation between things of two kinds produces in everyone a propensity to expect a thing of the second sort given an experience of a thing of the first sort. That is to say, there is no necessity in the relations between things that happen in the world, but, given our experience and the way our minds naturally work, we cannot help thinking that there is. In ruminative contemplation, where concepts loom large and have, perhaps, the overriding role, it still seems necessary for our thought to be given a focus in thought-occurrences such as images. These have sometimes been characterized as symbols which are the material of thought, but the reference to symbols is not really illuminating. Nonetheless, while a period of thought in which nothing of this kind occurs is possible, the general direction of thought seems to depend on such things occurring from time to time. In the case of the imagination, images seem even more crucial, in that without them it would be difficult, to say the least, for the point of view or perspective which is important for the imagination to be given a focus.

Along the same lines, it would be difficult rather than impossible for this to be so, since it is clear that entertaining a description of a scene, without there being anything like a vision of it, could sometimes give that perspective. The question still arises whether a description could always do quite what an image can do in this respect. The point is connected with an issue over which there has been some argument among psychologists, such as S.M. Kosslyn and Z.W. Pylyshyn, concerning what are termed ‘analogue’ versus ‘propositional’ theories of representation. This is an argument concerning whether the process of imagery is what Pylyshyn (1986) calls ‘cognitively penetrable’, i.e., such that its functioning is affected by beliefs or other intellectual processes expressible in propositions, or whether it can be independent of cognitive processes although capable itself of affecting the mental life because of the pictorial nature of images (the ‘analogue medium’). One example which has figured in that argument is that in which people are asked whether two asymmetrically presented figures can be made to coincide, the decision on which may require some kind of mental rotation of one or more of the figures. Those defending the ‘analogue’ theory point to the fact that there is some relation between the time taken and the degree of rotation required; this suggests that some process involving changing images is at work. For someone who has little or no imagery this suggestion may seem unintelligible. Is it enough for one to go through an intellectual working out of the possibilities, based on features of the figures that are judged relevant? This could not be said to be unimaginative as long as the intellectual process involved reference to perspectives or points of view in relation to the figures, the possibility of which the thinker might be able to appreciate. Such an account of the process of imagination cannot be ruled out, although there are conceivable situations in which the ‘analogue’ process of using images might be easier - or, at least, it might be easier for those who have imagery most like the actual perception of a scene; for others the situation might be different.
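
Schematically, the finding appealed to by defenders of the ‘analogue’ theory is that decision time grows roughly linearly with the angle through which a figure must be rotated; the following expression is only a rough summary of that relation, with the parameters left unspecified:

\[
RT(\theta) \approx a + b\,\theta ,
\]

where $\theta$ is the angular disparity between the two figures, $a$ covers encoding and response processes, and $b$ is the additional time per unit of rotation.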

The extreme of the former position is probably provided by those who have so-called ‘eidetic’ imagery, where having an image of a scene is just like seeing it, and where, if it is a function of memory, as it most likely is, it is clearly possible to find out details of the scene imagined by introspection of the image. The opposite extreme is typified by those for whom imagery, to the extent that it occurs at all, is at best ancillary to propositionally styled thought. But, to repeat the point made earlier, such thought will not count as imagination unless it provides a series of perspectives on its object. Because images are or can be perceptual analogues and have a phenomenal character analogous to what sensations provide in perception, they are most obviously suited, in the working of the mind, to the provision of those perspectives. But in a wider sense, imagination enters the picture whenever some link between thought and perception is required, as well as making possible imaginative forms of seeing-as. It may thus justifiably be regarded as a bridge between perception and thought.

An analogous relationship may prove helpful, at least, in explaining the parallels that obtain between the ‘objects or contents of speech acts’ and the ‘objects or contents of belief’. For the object of believing, like the object of saying, can have semantic properties, for example:

What Jones believes is true.

And:

What Jones believes entails what Smith believes.

One plausible hypothesis, then, is that the object of belief is the same sort of entity as what is uttered in speech acts (or what is written down).

This hypothesis also seems supported by an argument concerning the determination of thought, namely that our ability to think certain thoughts appears intrinsically connected with the ability to think certain others. For example, the ability to think that John hit Mary goes hand in hand with the ability to think that Mary hit John, but not with the ability to think that Toronto is overcrowded. Why is this so? The ability to produce or understand certain sentences is intrinsically connected with the ability to produce or understand certain others. For example, there are no native speakers of English who know how to say ‘John hits Mary’ but who do not know how to say ‘Mary hits John’. Similarly, there are no native speakers who understand the former sentence but not the latter. These facts are easily explained if sentences have a syntactic and semantic structure, but if sentences are taken to be atomic, these facts are a complete mystery. What is true for sentences is true also for thoughts. Thinking thoughts involves manipulating mental representations. If mental representations with a propositional content have a semantic and syntactic structure like that of sentences, it is no accident that one who is able to think that John hits Mary is thereby able to think that Mary hits John. Furthermore, it is no accident that one who can think these thoughts need not thereby be able to think thoughts having different components - for example, the thought that Toronto is overcrowded. And what goes here for thought goes for belief and the other propositional attitudes.
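
As an illustrative toy only - not any particular theorist’s model - structured representations make this systematicity unsurprising: whoever has the constituents needed to build one thought thereby has what is needed to build its recombinations, but not thoughts with different constituents.

    # Toy sketch: structured mental representations and systematicity.
    # The Thought class and examples are invented purely for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Thought:
        subject: str
        relation: str
        obj: str

    # Whoever can token the constituents 'John', 'hits', 'Mary' can build both:
    t1 = Thought("John", "hits", "Mary")
    t2 = Thought("Mary", "hits", "John")

    # But nothing here yields a thought with different constituents, e.g.
    # that Toronto is overcrowded, without those further constituents.
    print(t1)
    print(t2)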

If concepts of the simple (observational) sort were internal physical structures that had, in this sense, an information-carrying function - a function they acquired during learning - then instances of such types would have a content that (like a belief) could be either true or false. After learning, tokens of these structure types, when caused by some sensory stimulation, would ‘say’ (i.e., mean) what it was their function to ‘tell’ (inform about). They would therefore qualify as beliefs - at least of the simple observational sort.

Any information-carrying structure carries all kinds of information: if, for example, it carries the information ‘A’, it must also carry the information that ‘A or B’. As I conceived of it, learning was supposed to be a process in which a single piece of this information is selected for special treatment, thereby becoming the semantic content - the meaning - of subsequent tokens of that structure type. Just as we conventionally give artefacts and instruments information-providing functions, thereby making their activities and states - pointer readings, flashing lights, and so on - representations of the conditions they track, so learning converts neural states that carry information - ‘pointer readings’ in the head, so to speak - into structures that have the function of providing some vital piece of the information they carry. Concepts, so understood, are also presumed to serve as the meanings of linguistic items, underwriting relations of translation, definition, synonymy, antonymy and semantic implication. Much work in the semantics of natural language takes itself to be addressing conceptual structure.

Concepts have also been thought to be the proper objects of ‘philosophical analysis’: ‘analytic’ philosophers, when they ask about the nature of justice, knowledge or piety, expect to discover answers by means of introspective reflection. The expectation that one sort of thing could serve all these tasks went hand in hand with what has come to be called the ‘Classical View’ of concepts, according to which concepts have analyses in terms of conditions that are individually necessary and jointly sufficient for their satisfaction, and which are known to any competent user of them. The standard example is the especially simple one of [bachelor], which seems to be identical to [eligible unmarried male]. A more interesting, but problematic, one has been [knowledge], whose analysis was traditionally thought to be [justified true belief].

A notion that contrasts with that of a property is that of a ‘concept’, but one must be very careful here, since ‘concept’ has been used by philosophers and psychologists to serve many different purposes. One use has it that concepts are ways of conceiving of some aspect of the world. As such, concepts have a kind of subjectivity: different individuals might, for example, have different concepts of birds, one thinking of them primarily as flying creatures and the other as feathered creatures. Concepts in this sense are often described as a species of ‘mental representation’, and as such they stand in sharp contrast to the notion of a property, since a property is something existing in the world. However, it is possible to think of a concept as neither mental nor linguistic, and this would allow, though it does not dictate, that concepts and properties are the same kind of thing. Nonetheless, representational powers develop in some natural way, either (in the case of the senses) from their selectional history or (in the case of thought) from individual learning. The result is a network of internal representations that have, in different ways, the power to represent: experiences and beliefs.

This does, however, leave a question about the role of the senses in this total cognitive enterprise. If it is learning that, by way of concepts, is the source of the representational powers of thought, whence come the representational powers of experience? Or should we even think of experience in representational terms? We can have false beliefs, but are there false experiences? On this account, then, experience and thought are both representational. The difference resides in the source of their representational powers: learning in the case of thought, evolution in the case of experience.

Perception is always concept-dependent, at least in the sense that perceivers must be concept possessors and users, and almost certainly in the sense that perception involves the application of concepts to objects. It is at least arguable that organisms that react in a biologically useful way to something, but to which the attribution of concepts is implausible, should not be said to perceive those objects, however much the objects figure causally in their behaviour. Moreover, consciousness presents the object in such a way that the experience has a certain phenomenal character, which derives from the sensations that the causal processes involved set up. This is most evident in the case of touch (which, being a ‘contact sense’, provides a more obvious occasion for speaking of sensations than do ‘distance senses’ such as sight). Our tactual awareness of the texture of a surface is, to use a metaphor, ‘coloured’ by the nature of the sensations that the surface produces in our skin, and which we can be explicitly aware of if our attention is drawn to them (something that gives one indication of how attention too is involved in perception).

It has been argued that the phenomenal character of an experience is detachable from its conceptual content, in the sense that an experience of the same phenomenal character could occur even if the appropriate concepts were not available. Certainly the reverse is true - a concept-mediated awareness of an object can occur without any sensation-mediated experience - as in an awareness of something absent from us. It is also the case, however, that the look of something can be completely changed by the realization that it is to be seen as ‘χ’ rather than ‘y’. To the extent that this is so, the phenomenal character of a perceptual experience should be viewed as the result of the way in which sensations produced in us by objects blend with our ways of thinking of and understanding those objects (which, it should be noted, are things in the world and should not be confused with the sensations which they produce).

In the study of other parts of the natural world, we agree to be satisfied with post-Newtonian ‘best theory’ arguments: there is no privileged category of evidence that provides criteria for theoretical constructions. In the study of humans above the neck, however, naturalistic theory is held not to suffice: we must seek ‘philosophical explanations’, requiring that theoretical posits be specified in terms of categories of evidence selected by the philosopher, relying on unformulated notions such as ‘access in principle’ that have no place in naturalistic inquiry.

However one evaluates these ideas, they clearly involve demands beyond naturalism, and hence a form of methodological/epistemological dualism. In the absence of further justification, it seems fair to conclude that the inability to provide a ‘philosophical explanation’, or a concept of ‘rule-following’ that relies on access to consciousness (perhaps ‘in principle’), is a merit of a naturalistic approach, not a defect.

A standard paradigm in the study of language, given its classic form by Frege, holds that there is a ‘store of thoughts’ that is a common human possession and a common public language in which these thoughts are expressed. Furthermore, this language is based on a fundamental relation between words and things - reference or denotation - along with some mode of fixing reference (sense, meaning). The notion of a common public language has never been explained, and seems untenable. It is also far from clear why one should assume the existence of a common store of thoughts: the very existence of thoughts had been plausibly questioned, as a misreading of surface grammar, a century earlier.

Only those who share a common world can communicate, and only those who communicate can have the concept of an intersubjective, objective world. A number of things follow. If only those who communicate have the concept of an objective world, only those who communicate can doubt whether an external world exists. Yet it is impossible seriously (consistently) to doubt the existence of other people with thoughts, or the existence of an external world, since to communicate is to recognize the existence of other people in a common world. Language, that is, communication with others, is thus essential to propositional thought. This is not because it is necessary to have the words to express a thought (for it is not); it is because the ground of the sense of objectivity is intersubjectivity, and without the sense of objectivity, of the distinction between true and false, between what is thought to be and what is the case, there can be nothing rightly called ‘thought’.

Since words are also about things, it is natural to ask how their intentionality is connected to that of thoughts. Two views have been advocated. One view takes thought content to be self-subsistent relative to linguistic content, with the latter dependent upon the former. The other view takes thought content to be derivative upon linguistic content, so that there can be no thought without a bedrock of language. Appeals to language at this point are apt to founder on circularity, since words take on the powers of concepts only insofar as they express them. Thus, there seems little philosophical illumination to be got from making thought depend upon language. Nonetheless, it is not entirely clear what it amounts to, to assert or deny that there is an inner language of thought. If it means merely that concepts (thought-constituents) are structured in such a way as to be isomorphic with spoken language, then the claim is trivially true, given some natural assumptions. But if it means that concepts just are ‘syntactic’ items orchestrated into strings of the same, then the claim is acceptable only insofar as syntax is an adequate basis for meaning - which, on the face of it, it is not. Concepts no doubt have combinatorial powers comparable to those of words, but the question is whether anything else can plausibly be meant by the hypothesis of an inner language.

Yet it appears undeniable that spoken language does not have autonomous intentionality, but instead derives its meaning from the thoughts of speakers - though language may augment one’s conceptual capacities. So thought cannot post-date spoken language. The truth seems to be that in human psychology speech and thought are interdependent in many ways, but that there is no conceptual necessity about this. The only ‘language’ on which thought essentially depends is that of the structured system of concepts itself: thought depends on there being isolable concepts that can join with others to produce complete propositions. But this is merely to draw attention to a property that any system of concepts must have; it is not to say what concepts are or how they succeed in moving between thoughts as they do.

Finally, there is the old question of whether, or to what extent, a creature who does not understand a natural language can have thoughts. It seems pretty compelling that higher mammals, and humans raised without language, have their behaviour controlled by mental states that are sufficiently like our beliefs, desires and intentions to share those labels. It also seems easy to imagine non-communicating creatures who have sophisticated mental lives (they build weapons, dams, bridges, have clever hunting devices, etc.). At the same time, ascriptions of particular contents to non-language-using creatures typically seem exercises in loose speaking (does the dog really believe that there is a bone in the yard?), and it is no accident that, as a matter of fact, creatures who do not understand a natural language have at best primitive mental lives. There is no accepted explanation of these facts. It is possible that the primitive mental lives of animals are due to their failure to master natural languages, but the better explanation may be Chomsky’s: that animals simply lack the special language faculty peculiar to our species. As regards the otherwise normal human raised without language, the primitiveness might simply be due to the ignorance and lack of intellectual stimulation such a person would be condemned to. It also might be that higher thought requires a neural language with a structure comparable to that of a natural language, and that such neural languages are somehow acquired as the child learns its native language. Finally, the ascription of states to languageless creatures is a difficult topic that needs more attention. It is possible that as we learn more about the logic of our ascriptions of propositional content, we will realize that these ascriptions are egocentrically based on a similarity to the language in which we express our beliefs. We might then learn that we have no principled basis for ascribing propositional content to a creature who does not speak something a lot like one of our natural languages, or who does not have internal states with natural-language-like structure. It is somewhat surprising how little we know about thought’s dependence on language.

The relation between language and thought is philosophy’s chicken-or-egg problem. Language and thought are evidently importantly related, but how exactly are they related? Does language come first and make thought possible, or is it vice versa? Or are they on a par, each making the other possible?

When the question is stated this generally, however, no unqualified answer is possible. In some respects language is prior, in some respects thought is prior, and in other respects neither is prior. For example, it is arguable that a language is an abstract pairing of expressions and meanings, a function in the set-theoretic sense from expressions onto meanings. This makes sense of the fact that Esperanto is a language no one speaks, and it explains why, while it is a contingent fact that ‘La neige est blanche’ means that snow is white among the French, it is a necessary truth that it means that in French. But if natural languages such as French and English are abstract objects in this sense, then they exist even in possible worlds in which there are no thinkers. In this respect, then, language, as well as such notions as meaning and truth in a language, is prior to thought.

But even if languages are construed as abstract expression-meaning pairings, they are construed that way as abstractions from actual linguistic practice - from the use of language in communicative behaviour - and there remains a clear sense in which language is dependent on thought. The sequence of inscriptions ‘Naples is south of Rome’ means among us that Naples is south of Rome. This is a contingent fact, dependent on the way we use ‘Naples’, ‘Rome’ and the other parts of that sentence; had our linguistic practices been different, it might have meant something else or nothing at all. Clearly, the fact that ‘Naples is south of Rome’ means among us that Naples is south of Rome has something to do with the beliefs and intentions underlying our use of the words and structures that compose the sentence. More generally, it is a platitude that the semantic features that inscriptions and sounds have in a population of speakers are at least partly determined by the ‘propositional attitudes’ those speakers have in using those inscriptions and sounds, or in using the parts and structures that compose them. This is the same platitude, of course, which says that meaning depends at least partly on use: for the use in question is intentional use in communicative behaviour. So here is one clear sense in which language is dependent on thought: thought is required to imbue inscriptions and sounds with the semantic features they have in populations of speakers.

The sense in which language does depend on thought can be reconciled with the sense in which it does not in the following way. We can say that a sequence of inscriptions or sounds (or whatever) σ means q in a language L, construed as a function from expressions onto meanings, iff L(σ) = q. This notion of meaning-in-a-language, like the notion of a language, is a mere set-theoretic notion that is independent of thought in that it presupposes nothing about the propositional attitudes of language users: σ can mean q in L even if L has never been used. But then we can say that σ also means q in a population P just in case members of P use some language in which σ means q: that is, just in case some such language is a language of P. The question of moment then becomes: what relation must a population P bear to a language L in order for it to be the case that L is a language of P, a language members of P actually speak? Whatever the answer to this question is, this much seems right: in order for a language to be a language of a population of speakers, those speakers must produce sentences of the language in their communicative behaviour. Since such behaviour is intentional, we know that the notion of a language’s being the language of a population of speakers presupposes the notion of thought. And we also know that the same is true of the correct account of the semantic features expressions have in populations of speakers.
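To make the set-theoretic construal concrete, here is a minimal sketch in Python (purely illustrative: the toy dictionary and the helper names means_in_language and means_in_population are my own assumptions, not anything proposed above). A language is modelled as a finite function from expressions onto meanings; meaning-in-a-population then adds the further, intentional fact that some such language is actually used by the population.

    # A language construed as a function from expressions onto meanings,
    # modelled here as a finite dictionary; meanings are strings standing
    # in for propositions.
    L_french = {
        "La neige est blanche": "that snow is white",
        "Naples est au sud de Rome": "that Naples is south of Rome",
    }

    def means_in_language(sigma, L):
        """sigma means q in L iff L(sigma) = q; returns q, or None."""
        return L.get(sigma)

    def means_in_population(sigma, languages_used_by_P):
        """sigma means q in a population P just in case some language
        actually used by P maps sigma onto q."""
        for L in languages_used_by_P:
            q = means_in_language(sigma, L)
            if q is not None:
                return q
        return None

    # The first fact holds whether or not anyone ever uses L_french; the
    # second requires the (intentional) fact that a population uses it.
    print(means_in_language("La neige est blanche", L_french))
    print(means_in_population("La neige est blanche", [L_french]))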

This is a pretty thin result, not one likely to be disputed, and the difficult questions remain. We know that there is some relation ‘R’ such that a language ‘L’ is used by a population ‘P’ iff ‘L’ bears ‘R’ to ‘P’. Let us call this relation, whatever it turns out to be, the ‘actual-language relation’. We know that to explain the actual-language relation is to explain the semantic features expressions have among those who are apt to produce those expressions. And we know that any account of the relation must require language users to have certain propositional attitudes. But how exactly is the actual-language relation to be explained in terms of the propositional attitudes of language users? And what sort of dependence might those propositional attitudes in turn have on language, or on the semantic features that are fixed by the actual-language relation? Let us begin with the relation of language to thought, before turning to the relation of thought to language.

All must agree that the actual-language relation, and with it the semantic features linguistic items have among speakers, is at least partly determined by the propositional attitudes of language users. This still leaves plenty of room for philosophers to disagree both about the extent of the determination and about the nature of the determining propositional attitudes. At one end of the spectrum, we have those who hold that the actual-language relation is wholly definable in terms of non-semantic propositional attitudes. This position in logical space is most famously occupied by the programme, sometimes called ‘intention-based semantics’, of the late Paul Grice and others. The foundational notion in this enterprise is a certain notion of speaker meaning. It is the species of communicative behaviour reported when we say, for example, that in uttering ‘Il pleut’ Pierre meant that it was raining, or that in waving her hand the Queen meant that you were to leave the room. Intention-based semantics seeks to define this notion of speaker meaning wholly in terms of communicators’ audience-directed intentions and without recourse to any semantic notion. Then it seeks to define the actual-language relation in terms of the now-defined notion of speaker meaning, together with certain ancillary notions such as that of a conventional regularity or practice, themselves defined wholly in terms of non-semantic propositional attitudes. The definition of the actual-language relation in terms of speaker meaning will require the prior definition in terms of speaker meaning of other agent-semantic notions, such as the notions of speaker reference and of illocutionary acts, and this too is part of the intention-based-semantics programme.

Some philosophers object to intention-based semantics because they think it precludes a dependence of thought on the communicative use of language. This is a mistake. Even if the intention-based semantic definitions are given a strong reductionist reading - as saying that public-language semantic properties (i.e., those semantic properties that supervene on use in communicative behaviour) just are psychological properties - it might still be that one could not have propositional attitudes unless one had mastery of a public language. Reason-giving explanations may be treated as generating causal-explanatory generalizations, subject to no more than the epistemic indeterminacy of other such terms; the causal-explanatory approach to reason-giving explanations also requires an account of the intentional content of our psychological states which makes it possible for such content to do that explanatory work. By the early 1970s, many physicalists were looking for a way of characterizing the primacy of the physical that is free from reductionist implications, and the key attraction of supervenience to physicalists has been its promise to deliver dependence without reduction. The example of moral theory has seemed encouraging: Moore and Hare, who made much of the supervenience of the moral on the naturalistic, were at the same time strong critics of ethical naturalism, the principal reductionist position in ethical theory, and there has been a broad consensus among ethical theorists that Moore and Hare were right - that the moral, or more broadly the normative, supervenes on the non-moral without being reducible to it. Whether or not this is plausible (that is a separate question), it would be no more logically puzzling than the idea that one could not have any propositional attitudes unless one had ones with certain sorts of contents. There is, moreover, no pressing reason to think that the semantic needs to be definable in terms of the psychological. Many intention-based semantic theorists have been motivated by a strong version of ‘physicalism’, which requires the reduction of all intentional properties (i.e., all semantic and propositional-attitude properties) to physical, or at least topic-neutral or functional, properties; for it is plausible that there could be no reduction of the semantic and the psychological to the physical without a prior reduction of the semantic to the psychological. But it is arguable that such a strong version of physicalism is not what is required in order to fit the intentional into the natural order.

So the most reasonable view about the actual-language relation is that it requires language users to have certain propositional attitudes, but that there is no prospect of defining the relation wholly in terms of non-semantic propositional attitudes. It is further plausible that any account of the actual-language relation must appeal to speech acts such as speaker meaning, where the correct account of these speech acts is irreducibly semantic (they will fail to supervene on the non-semantic propositional attitudes of speakers in the way that intentions fail to supervene on an agent’s beliefs and desires). If this is right, it still leaves a further issue: is it possible to define the actual-language relation, and if so, will any irreducibly semantic notions enter into that definition other than the sorts of speech-act notions already alluded to? These questions have not been much discussed in the literature; there is neither an established answer nor competing schools of thought. One tenable view is that the actual-language relation is one of the few things in philosophy that can be defined, and that speech-act notions are the only irreducibly semantic notions the definition must appeal to.

Our attention now turns to the dependence of thought on language, beginning with the claim that propositional attitudes are relations to linguistic items which obtain at least partly by virtue of the content those items have among language users. This position does not imply that believers have to be language users, but it does make language an essential ingredient in the concept of belief. We might then learn that we have no principled basis for ascribing propositional content to a creature who does not speak something a lot like one of our natural languages, or who does not have internal states with natural-language-like structure. It is somewhat surprising how little we know about thought’s dependence on language.

The Scottish philosopher David Hume (1711-76), born in Edinburgh, developed a theory of knowledge that starts from the distinction between perception and thought. When we see, hear, feel, etc. (in general, perceive) something, we are aware of something immediately present to the mind through the senses. But we can also think and believe and reason about things which are not present to our senses at the time, e.g., objects and events in the past, the future, or the present beyond our current perceptual experience. Such beliefs make it possible for us to deliberate and so act on the basis of information we have acquired about the world.

For Hume all mental activity involves the presence before the mind of some mental entity. Perception is said to differ from thought only in that the kinds of things that are present to the mind in each case are different. In the case of perception it is an ‘impression’; in the case of thought, although what is thought about is absent, what is present to the mind is an ‘idea’ of whatever is thought about. The only difference between an impression and its corresponding idea is the greater ‘force and liveliness’ with which it ‘strikes upon the mind’.

All the things that we can think or believe or reason about are either ‘relations of ideas’ or ‘matters of fact’. Each of the former (e.g., that three times five equals half of thirty) holds necessarily: its negation implies a contradiction, and such truths are ‘discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe’. Hume has no systematic theory of this kind of knowledge: what is or is not included in a given idea, and how we know whether it is, is taken as largely unproblematic. Each ‘matter of fact’ is contingent: its negation is distinctly conceivable and represents a possibility. That the sun will not rise tomorrow is no less intelligible, and no more implies a contradiction, than the proposition that it will rise. Thought alone is therefore never sufficient to assure us of the truth of any matter of fact; sense experience is needed. Only what is directly present to the senses at a given moment is known by perception. A belief in a matter of fact which is not present at the time must therefore be arrived at by a transition of some kind from present impressions to a belief in the matter of fact in question. Hume’s theory of knowledge is primarily an explanation of how that transition is in fact made. It takes the form of an empirical ‘science of human nature’ which is to be based on careful observation of what human beings do and what happens to them.

Thoughts, then, have contents carried by mental representations. Now, there are different kinds of representation - pictures, maps, models and words, to name only some. Exactly what sort of representation is mental representation? Our understanding of connectionism will necessarily have implications for the philosophy of mind; two areas in particular on which it is likely to have impact are the analysis of the mind as a representational system and the analysis of intentional idioms. What is more, imagery has played an enormously important role in philosophical conceptions of the mind. The most popular view of images prior to this century was what we might call ‘the picture theory’. According to this view, held by such diverse philosophers as Aristotle, Descartes and Locke, mental images are picture-like in the way they represent objects in the world. Despite its widespread acceptance, the picture theory of mental images was left largely unexplained in the traditional philosophical literature. Admittedly, most of those who accepted the theory held that mental images copy or resemble what they represent, but little more was said. Sententialism distinguishes itself as a version of representationalism by positing that mental representations are themselves linguistic expressions within a ‘language of thought’. Some sententialists conjecture that the language of thought is just the thinker’s spoken language internalized; others take it to be an unarticulated, internal language in which the computations supposedly definitive of cognition occur. Either way, sententialism is a provocative thesis.

Thoughts, in having contents, possess semantic properties; yet that does not imply that they are couched in any natural spoken language. Sententialism need not insist that the language of thought be a natural spoken language like Chinese or English. Rather, it simply proposes that psychological states that admit of the relevant sort of semantic properties are likely relations to the sort of structured representations commonly found in, but not isolated to, public languages. This is certainly not to say that all psychological states in all sorts of psychological agents must be relations to mental sentences. Rather, the idea is that thinking - at least the kind a thinker such as Peter Abelard (1079-1142) exemplifies - involves the processing of internally complex representations whose semantic properties are sensitive to those of their parts, much in the manner in which the meanings and truth conditions of complex public sentences depend upon the semantic features of their components. Abelard might also exploit various other kinds of mental representations and associated processes. A sententialist may allow that in some of his cognitive adventures Abelard rotates mental images or adjusts weights on connections among internally undifferentiated network nodes. Sententialism is simply the thesis that some kinds of cognitive phenomena are best explained by the hypothesis of a mental language. There is, then, no principled reason why the existence of non-verbal creatures should preclude a language of thought.

It is tempting to gloss the representational theory by speaking of a language of thought. Fodor argues that representation and the inferential manipulation of representations require a medium of representation, in human subjects no less than in computers. To say that physically realized thoughts and mental representations are ‘linguistic’ is to say that: (1) they are composed of parts and are syntactically structured; (2) their simplest parts refer to or denote things and properties in the world; (3) their meanings as wholes are determined by the semantic properties of their basic parts together with the grammatical rules that have generated their overall syntactic structures; (4) they have truth conditions, that is, putative states of affairs in the world that would make them true, and accordingly they are true or false depending on the way the world happens actually to be; and (5) they bear logical relations of entailment or implication to each other. According to the representational theory, human beings have systems of physical states that serve as the elements of a lexicon or vocabulary, and human beings (somehow) physically realize rules that combine strings of those elements into configurations having the complexities of representational content that common sense associates with the propositional attitudes. And that is why thoughts and beliefs are true or false just as English sentences are, though a ‘language of thought’ may differ sharply in its grammar from any natural language.
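A toy model may make the compositional picture in (1)-(5) vivid. The following Python sketch is my own illustration, not Fodor’s theory: the miniature ‘world’, the tuple encoding of syntactic structure and the evaluate function are assumptions made for the example. The point is only that once the basic parts have denotations and the combination rules are fixed, the truth conditions of every complex representation are thereby determined.

    # A toy world assigning denotations to names and to one two-place relation.
    world = {
        "names": {"Naples": "naples", "Rome": "rome"},
        "relations": {"south_of": {("naples", "rome")}},
    }

    # 'Sentences' are syntactically structured tuples:
    #   ("REL", relation, name1, name2), ("NOT", s), ("AND", s1, s2).
    # Their truth values are computed from the denotations of their parts
    # plus the rule attached to each mode of combination.
    def evaluate(sentence, model):
        tag = sentence[0]
        if tag == "REL":
            _, rel, a, b = sentence
            pair = (model["names"][a], model["names"][b])
            return pair in model["relations"][rel]
        if tag == "NOT":
            return not evaluate(sentence[1], model)
        if tag == "AND":
            return evaluate(sentence[1], model) and evaluate(sentence[2], model)
        raise ValueError("unknown constituent: %r" % (tag,))

    s = ("REL", "south_of", "Naples", "Rome")
    print(evaluate(s, world))           # True in this toy world
    print(evaluate(("NOT", s), world))  # its negation is false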

We know that there is some relation R such that a language L is used by a population P iff L bears R to P. To explain this relation, whatever it turns out to be - the actual-language relation - is to explain the semantic features expressions have among those who are apt to produce those expressions, and we know that any account of the relation must require language users to have certain propositional attitudes. But how exactly is the actual-language relation to be explained in terms of the propositional attitudes of language users? And what sort of dependence might those propositional attitudes in turn have on language, or on the semantic features that are fixed by the actual-language relation?

Some philosophers object to intention-based semantics only because they think it precludes a dependence of thought on the communicative use of language. This is a mistake. Even if intention-based semantic definitions are given a strong reductionist reading - as saying that public-language semantic properties (i.e., those semantic properties that supervene on use in communicative behaviour) just are psychological properties - it might still be that one could not have propositional attitudes unless one had mastery of a public language. The idea of supervenience is usually thought to have originated in moral theory, in the works of such philosophers as G.E. Moore and R.M. Hare. Hare, for example, claimed that ethical predicates are ‘supervenient predicates’ in the sense that no two things (persons, acts, states of affairs) could be exactly alike in all descriptive or naturalistic respects but unlike in that some ethical predicate (‘good’, ‘right’, etc.) truly applies to one but not to the other. That is, there could be no difference in a moral respect without a difference in some descriptive, or non-moral, respect. Following Moore and Hare, from whom he avowedly borrowed the idea of supervenience, Davidson went on to assert that supervenience in this sense is consistent with the irreducibility of supervenient properties to their ‘subvenient’, or ‘base’, properties: ‘Dependence or supervenience of this kind does not entail reducibility through law or definition . . .’. Thus, three ideas have come to be closely associated with supervenience: (1) ‘property covariation’ (if two things are indiscernible in base properties, they must be indiscernible in supervenient properties); (2) ‘dependence’ (supervenient properties are dependent on, or determined by, their subvenient bases); and (3) ‘non-reducibility’ (the property covariation and dependence involved in supervenience need not make supervenient properties reducible to their base properties). Whether or not this is plausible (that is a separate question), it would be no more logically puzzling than the idea that one could not have propositional attitudes unless one had ones with certain sorts of content. Tyler Burge’s insight, that the contents of one’s thoughts are partially determined by the meanings of one’s words in one’s linguistic community, is perfectly consistent with an intention-based-semantic reduction of the semantic to the psychological. Nevertheless, there is reason to be sceptical of the intention-based semantic programme.
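For definiteness, the covariation clause (1) is standardly given a formal rendering along the following lines, where \mathcal{B} is the family of base properties and \mathcal{S} the family of supervenient properties (a weak-supervenience schema; the appropriate modal strength is left open here):

    \forall x\, \forall y\; \big[\, \forall B \in \mathcal{B}\, (Bx \leftrightarrow By) \;\rightarrow\; \forall S \in \mathcal{S}\, (Sx \leftrightarrow Sy) \,\big]

Clauses (2) and (3) then add, respectively, that this covariation holds because supervenient properties are determined by base properties, and that the determination need not be backed by reductive laws or definitions.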

So the most reasonable view about the actual-language relation is that it requires language users to have certain propositional attitudes, but that there is no prospect of defining the relation wholly in terms of non-semantic propositional attitudes. It is further plausible that any account of the actual-language relation must appeal to speech acts such as speaker meaning, where the correct account of these speech acts is irreducibly semantic (they will fail to supervene on the non-semantic propositional attitudes of speakers in the way that intentions fail to supervene on an agent’s beliefs and desires). Is it possible to define the actual-language relation, and if so, will any irreducibly semantic notions enter into that definition other than the sorts of speech-act notions already alluded to? These questions have not been much discussed in the literature; there is neither an established answer nor competing schools of thought. It has, however, been argued that the actual-language relation is one of the few things in philosophy that can be defined, and that speech-act notions are the only irreducibly semantic notions the definition must appeal to (Schiffer, 1993).

One claimed dependence of thought on language holds that propositional attitudes are relations to linguistic items which obtain, at least in part, by virtue of the content those items have among language users. This position does not imply that believers have to be language users, but it does make language an essential ingredient in the concept of belief. The position is motivated by two considerations: (a) the supposition that believing is a relation to things believed, which things have truth values and stand in logical relations to one another, and (b) the desire not to take things believed to be propositions - abstract, mind- and language-independent objects that have essentially the truth conditions they have. Consideration (a) is well motivated: the relational construal of propositional attitudes is probably the best way to account for the quantification in ‘Harvey believes something irregular about you’. But there are problems with taking linguistic items, rather than propositions, as the objects of belief. If ‘Harvey believes that irregularities are abnormal’ is represented as a relation between Harvey and the sentence ‘irregularities are abnormal’, then one could know the truth expressed by the sentence about Harvey without knowing the content of his belief: for one could know that he stands in the belief relation to ‘irregularities are abnormal’ without knowing its content. This is unacceptable, for if Harvey believes that irregularities are abnormal, then what he believes - the reference of ‘that irregularities are abnormal’ - is that irregularities are abnormal. But what is this thing, that irregularities are abnormal? Well, it is abstract, in that it has no spatial location; it is mind- and language-independent, in that it exists in possible worlds in which there are neither thinkers nor speakers; and, necessarily, it is true iff irregularities are abnormal. In short, it is a proposition - an abstract, mind- and language-independent thing that has a truth condition and has essentially the truth condition it has.

A more plausible way in which thought depends on language is suggested by the topical thesis that we think in a ‘language of thought’. Perhaps this is nothing more than the vague idea that the neural states that realize our thoughts ‘have elements and structure in a way that is analogous to the way in which sentences have elements and structure’. But we can get a more literal rendering by relating it to the abstract conception of language already recommended. On this conception, a language is a function from ‘expressions’ - sequences of marks or sounds or neural states or whatever - onto meanings, which meanings will include the propositions our propositional-attitude relations relate us to. We could then read the language-of-thought hypothesis as the claim that having propositional attitudes consists in standing in a certain relation to a language whose expressions are neural states. There would now be more than one ‘actual-language relation’. One might be called the ‘public-language relation’, since it makes a language the instrument of communication of a population of speakers. Another might be called the ‘language-of-thought relation’, because standing in that relation to a language makes it one’s lingua mentis. Since the abstract notion of a language has been so weakly construed, it is hard to see how the minimal language-of-thought proposal just sketched could fail to be true. At the same time, it has been given no interesting work to do. In trying to give it more interesting work, further dependencies of thought on language might come into play. For example, it has been claimed that the language of thought of a public-language user is the public language she uses: her neural sentences are something like her spoken sentences. For another example, it might be claimed that even if one’s language of thought is distinct from one’s public language, the language-of-thought relation makes presuppositions about the public-language relation in ways that make the content of one’s thoughts dependent on the meaning of one’s words in one’s public-language community.

All of this suggests a specific ‘mental organ’, to use Chomsky’s phrase, that has evolved in the human cognitive system specifically in order to make language possible. The specific structure of this organ simultaneously constrains the range of possible human languages and guides the learning of the child’s target language, later making rapid on-line language processing possible. The principles represented in this organ constitute the innate linguistic knowledge of the human being. Additional evidence for the early operation of such an innate language-acquisition module is derived from the many infant studies showing that infants selectively attend to sound-streams that are prosodically appropriate, that have pauses at clausal boundaries, and that contain linguistically permissible phonological sequences.

A particularly strong form of the innateness hypothesis in the psycholinguistic domain is Fodor’s (1975, 1987) ‘Language of Thought’ hypothesis. Fodor argues not only that the language learning and processing faculty is innate, but that the human representational system exploits an innate language of thought which has all of the expressive power of any learnable human language. Hence, he argues, all concepts are in fact innate, in virtue of the representational power of the language of thought. This remarkable doctrine is even stronger than the classical rationalist doctrine of innate ideas: whereas Chomsky echoes Descartes in arguing that the most general concepts required for language learning are innate, while allowing that more specific concepts are acquired, Fodor echoes Plato in arguing that every concept we ever ‘learn’ is in fact innate.

Fodor defends this view by arguing that the process of language learning is a process of hypothesis formation and testing, where among the hypotheses that must be formulated are meaning postulates for each term in the language being acquired. But in order to formulate and test a hypothesis of the form ‘χ’ means ‘y’, where ‘χ’ denotes a term in the target language, prior to the acquisition of that language, the language learner, Fodor argues, must already have the resources necessary to express ‘y’. Therefore, there must be, in the language of thought, a predicate co-extensive with each predicate in any language that a human can learn. Fodor also argues for the language-of-thought thesis by noting that the language in which human information processing occurs cannot be a human spoken language, since that would, contrary to fact, privilege one of the world’s languages as the most easily acquired. Moreover, it cannot be, he argues, that each of us thinks in our own native language, since that would (a) imply that we could not think prior to acquiring a language, contrary to the original argument, and (b) mean that psychology would be radically different for speakers of different languages. Hence, Fodor argues, there must be a non-conventional language of thought, and the fact that the mind comes ‘wired’ with mastery of its predicates, together with its expressive completeness, entails that all concepts are innate.

The dispute about whether there are innate ideas is much older than these contemporary debates. Plato in the ‘Meno’ (the learning paradox) famously argues that all of our knowledge is innate. Descartes (1596-1650) and Leibniz (1646-1716) defended the view that the mind contains innate ideas; Berkeley (1685-1753), Hume (1711-76) and Locke (1632-1704) attacked it. In fact, as we now conceive the great debate between European rationalism and British empiricism in the seventeenth and eighteenth centuries, the doctrine of innate ideas is a central point of contention: rationalists typically claim that knowledge is impossible without a significant stock of general innate ‘concepts’ or judgements; empiricists argued that all ideas are acquired from experience. This debate is replayed with more empirical content and with considerably greater conceptual complexity in contemporary cognitive science, most particularly within the domains of psycholinguistic theory and cognitive developmental theory. Although Chomsky is recognized as one of the main forces in the overthrow of behaviourism and in the initiation of the ‘cognitive era’, the relation between psycholinguistics and cognitive psychology has always been an uneasy one. The term ‘psycholinguistics’ is often taken to refer primarily to psychological work on language that is influenced by ideas from linguistic theory; mainstream cognitive psychologists, for example when they write textbooks, often prefer the term ‘psychology of language’. The difference is not, however, merely one of name. Nativists such as Fodor and Chomsky, who argue that all concepts, or all of linguistic knowledge, are innate, stand against empiricists who argue that there is no need to appeal to the innate in explaining the acquisition of language or the facts of cognitive development. But stated so baldly this debate would be silly and sterile, for obvious reasons: something is innate. Brains are innate, and the structure of the brain must constrain the nature of cognitive and linguistic development to some degree. Equally obviously, something is learned - and learned, as opposed to merely grown as limbs or hair grow - for not all of the world’s citizens end up speaking English, or knowing the Special Theory of Relativity. The interesting questions then all concern exactly what is innate, to what degree it counts as knowledge, what is learned, and to what degree its content and structure are determined by innately specified cognitive structures. And that is plenty to debate about.

Innatists argue that the very presence of linguistic universals argues for the innateness of linguistic knowledge, but more important and more compelling is the fact that these universals are, from the standpoint of communicative efficiency or from the standpoint of any plausible simplicity criterion, adventitious. There are many conceivable grammars, and those determined by universal grammar are not ipso facto the most efficient or the simplest. Nonetheless, all human languages satisfy the constraints of universal grammar. Since neither the communicative environment nor the communicative task can explain this phenomenon, it is reasonable to suppose that it is explained by the structure of the mind - and, therefore, by the fact that the principles of universal grammar lie innate in the mind and constrain the languages that a human can acquire.

Linguistic empiricists answer that there are alternative possible explanations of the existence of such adventitious universal properties of human languages. For one thing, such universals could be explained, Putnam (1975, 1992) argues, by appeal to a common ancestral language, and the inheritance of features of that language by its descendants. Or it might turn out that, despite the lack of direct evidence at present, the features of universal grammar in fact do serve either the goals of communicative efficacy or those of simplicity according to some psychologically plausible metric. Finally, empiricists point out that the very existence of universal grammar might be a trivial logical artefact (Quine, 1968): any finite set of structures will have some features in common, and since there are a finite number of languages, it follows trivially that there are features they all share. Moreover, it is argued, many features of universal grammar are interdependent, so in fact the set of independent principles shared by the world’s languages may be rather small. Hence, even if these are innately determined, the amount of innate knowledge thereby required may be quite small as compared with the total corpus of linguistic knowledge acquired by the first-language learner.

These replies are rendered less plausible, innatists argue, when one considers the fact that the errors language learners make in acquiring their first language seem to be driven far more by abstract features of grammar than by any available input data. So, despite receiving correct examples of irregular plurals or past-tense verb forms, and despite having correctly formed the irregular forms for those words, children will often incorrectly regularize irregular verbs once they acquire mastery of the rule governing regulars in their language. And in general, not only the correct inductions of linguistic rules by young language learners but, more importantly, their erroneous inductions - made in the absence of confirmatory data and in the presence of refuting data - are always consistent with universal grammar, often simply representing the incorrect setting of a parameter in the grammar. More generally, innatists argue that all grammatical rules that have ever been observed satisfy the structure-dependence constraint. That is, many linguists and psycholinguists argue that all known grammatical rules of all the world’s languages, including the fragmentary languages of young children, must be stated as rules governing hierarchical sentence structures, and not as rules governing, say, sequences of words. Many of these constraints, such as the constituent-command (c-command) constraint governing anaphora, are highly abstract indeed, and appear to be respected by even very young children (Solan, 1983; Crain, 1991). Such constraints may, innatists argue, be necessary conditions of learning natural language in the absence of specific instruction, modelling and correction - the conditions in which all first-language learners acquire their native languages.

An important empiricist answer to these observations derives from recent studies of ‘connectionist’ models of first-language acquisition (Rumelhart & McClelland, 1986, 1987). Connectionist systems not previously trained to represent any subset of universal grammar, when they induce grammars from corpora that include a large set of regular forms and a few irregulars, also tend to over-regularize, exhibiting the same U-shaped learning curve seen in human language acquirers. It is also noteworthy that connectionist learning systems that induce grammatical systems ‘accidentally’ acquire rules on which they are not explicitly trained, but which are consistent with those on which they are trained, suggesting that as children acquire portions of their grammar, they may accidentally ‘learn’ other consistent rules, which may be correct in other human languages, but which must then be ‘unlearned’ in their home language. Yet such ‘empiricist’ language-acquisition systems have yet to demonstrate their ability to induce a sufficiently wide range of the rules hypothesized to be comprised by universal grammar to constitute a definitive empirical argument for the possibility of natural language acquisition in the absence of a powerful set of innate constraints.
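For readers who want a concrete sense of the kind of system at issue, the following Python/NumPy sketch sets up a toy past-tense learner: a tiny two-layer network trained on a corpus in which regular verbs greatly outnumber irregulars, whose prediction for an irregular stem can be inspected as training proceeds. It is emphatically not Rumelhart and McClelland’s model; the verb list, the frequency skew, the network size and the candidate over-regularized forms (‘goed’, ‘runned’) are all assumptions made for illustration, and whether the toy reproduces the full U-shaped curve depends on those choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy lexicon of (stem, correct past form); "goed" and "runned" are the
    # over-regularized candidates the network could wrongly come to prefer.
    pairs = [("walk", "walked"), ("jump", "jumped"), ("play", "played"),
             ("look", "looked"), ("call", "called"), ("want", "wanted"),
             ("go", "went"), ("run", "ran")]
    forms = sorted({p for _, p in pairs} | {"goed", "runned"})
    stems = [s for s, _ in pairs]

    def one_hot(i, n):
        v = np.zeros(n)
        v[i] = 1.0
        return v

    X = np.array([one_hot(i, len(stems)) for i in range(len(stems))])
    Y = np.array([forms.index(p) for _, p in pairs])

    # Regular verbs are sampled far more often than irregulars (frequency skew).
    freq = np.array([10.0 if p.endswith("ed") else 1.0 for _, p in pairs])
    freq = freq / freq.sum()

    H, lr = 8, 0.3                      # hidden units, learning rate
    W1 = rng.normal(0.0, 0.1, (len(stems), H))
    W2 = rng.normal(0.0, 0.1, (H, len(forms)))

    def forward(x):
        h = np.tanh(x @ W1)
        logits = h @ W2
        e = np.exp(logits - logits.max())
        return h, e / e.sum()

    for step in range(3001):
        i = rng.choice(len(pairs), p=freq)
        x, y = X[i], Y[i]
        h, p = forward(x)
        # Softmax cross-entropy gradient, backpropagated through both layers.
        dlogits = p.copy()
        dlogits[y] -= 1.0
        dW2 = np.outer(h, dlogits)
        dh = W2 @ dlogits
        dW1 = np.outer(x, dh * (1.0 - h ** 2))
        W2 -= lr * dW2
        W1 -= lr * dW1
        if step % 500 == 0:
            _, pg = forward(X[stems.index("go")])
            print(step, "predicted past of 'go' ->", forms[int(pg.argmax())])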

The poverty-of-the-stimulus argument has been of enormous influence in innateness debates, though its soundness is hotly contested. Chomsky notes that (1) the examples of the target language to which the language learner is exposed are always jointly compatible with an infinite number of alternative grammars, and so vastly underdetermine the grammar of the language; (2) the corpus always contains many examples of ungrammatical sentences, which should in fact serve as falsifiers of any empirically induced correct grammar of the language; and (3) there is, in general, no explicit reinforcement of correct utterances or correction of incorrect utterances, either by the learner or by those in the immediate training environment. Therefore, he argues, since it is impossible to explain the learning of the correct grammar - a task accomplished by all normal children within a very few years - on the basis of any available data or known learning algorithms, it must be that the grammar is innately specified, and is merely ‘triggered’ by relevant environmental cues.

Opponents of the linguistic innateness hypothesis, however, point out that the circumstance Chomsky notes in this argument is hardly specific to language. As is well known from arguments due to Hume (1978), Wittgenstein (1953), Goodman (1972) and Kripke (1982), in all cases of empirical abduction, and of training in the use of a word, data underdetermine theories. This moral is emphasized by Quine (1954, 1960) as the principle of the underdetermination of theory by data. But we nonetheless do abduce adequate theories in science, and we do learn the meanings of words. And it would be bizarre to suggest that all correct scientific theories or the facts of lexical semantics are innate.

But, innatists reply, when the empiricist relies on the underdetermination of theory by data as a counterexample, a significant disanalogy with language acquisition is ignored: the abduction of scientific theories is a difficult, laborious process, taking a sophisticated theorist a great deal of time and deliberate effort. First-language acquisition, by contrast, is accomplished effortlessly and very quickly by a small child. The enormous relative ease with which such a complex and abstract domain is mastered by such a naïve ‘theorist’ is evidence for the innateness of the knowledge achieved.

Empiricists such as Putnam (1926- ) have rejoined that innatists underestimate the amount of time that language learning actually takes, focussing only on the number of years from the apparent onset of acquisition to the achievement of relative mastery over the grammar. Instead of noting how short this interval is, they argue, one should count the total number of hours spent listening to language and speaking during this time. That number is in fact quite large, and is comparable to the number of hours of study and practice required in the acquisition of skills that are not argued to derive from innate structures, such as chess playing or musical composition. Hence, they argue, once the correct temporal parameters are taken into consideration, language learning looks more like one more case of human skill acquisition than like a special unfolding of innate knowledge.

Innatists, however, note that while the ease with which most such skills are acquired depends on general intelligence, language is learned with roughly equal speed, and to roughly the same level of general syntactic mastery, regardless of general intelligence. In fact, even significantly retarded individuals, assuming no special language deficit, acquire their native language on a time-scale and to a degree comparable to that of normally intelligent children. The language-acquisition faculty hence appears to give access to a sophisticated body of knowledge independent of the sophistication of the general knowledge of the language learner. That is, language-learning and language-processing mechanisms appear to operate apart from general cognition. They are informationally encapsulated - only linguistic information is relevant to language acquisition and processing. They are mandatory - language learning and language processing are automatic. Moreover, language is subserved by specific dedicated neural structures, damage to which predictably and systematically impairs linguistic functioning, and not general cognitive functioning.

Again, the issues at stake in the debate concerning the innateness of such general concepts pertaining to the physical world cannot be so stark as a dispute between a view on which nothing is innate and one according to which all empirical knowledge is innate. Rather, the important - and, again, always empirical - questions concern just what is innate, just what is acquired, and how innate equipment interacts with the world to produce experience. As Kant observed, ‘There can be no doubt that all our knowledge begins with experience . . . But though all our knowledge begins with experience, it does not follow that it all arises out of experience’.

Philosophically, the unconscious mind postulated by psychoanalysis is controversial, since it requires thinking in terms of a partitioned mind and applying a mental vocabulary (intentions, desires, repression) to a part to which we have no conscious access. The problem is whether this merely uses a harmless spatial metaphor of the mind, or whether it involves a philosophical misunderstanding of mental ascription. Other philosophical reservations about psychoanalysis concern the apparently arbitrary and unfalsifiable nature of the interpretative schemes employed. The method of psychoanalysis, or psychoanalytic therapy for psychological disorders, was pioneered by Sigmund Freud (1856-1939). The method relies on an interpretation of what a patient says while ‘freely associating’, or reporting what comes to mind in connection with topics suggested by the analyst. The interpretation proceeds according to the scheme favoured by the analyst, and reveals ideas dominating the unconscious but previously inadmissible to the conscious mind of the subject. When these are confronted, improvement can be expected. The widespread practice of psychoanalysis is not matched by established data on its rate of improvement.

Nonetheless, the task of analysing psychoanalytic explanation is complicated in several ways. The first concerns the relation of theory to practice. There are various perspectives on the relation of psychoanalysis, the therapeutic practice, to the theoretical apparatus built around it, and these lead to different views of psychoanalysis’ claim to cognitive status. The second concerns psychoanalysis’ legitimation. The way that psychoanalytic explanation is understood has immediate implications for one’s view of its truth or acceptability, and this of course is a notoriously controversial matter. The third is exegetical. Any philosophical account of psychoanalysis must of course start with Freud himself, but it will inevitably privilege some strands of his thought at the expense of others, and in so doing favour particular post-Freudian developments over others.

Freud clearly regarded psychoanalysis as engaged principally in the task of explanation, and held fast to his claims for its truth in the course of alterations in his view of the efficacy of psychoanalytic therapy. Some of psychoanalysis’ advocates have, under pressure, retreated to the view that psychoanalytic theory has merely instrumental value, as facilitating psychoanalytic therapy; but this is not the natural view, which is that explanation is the autonomous goal of psychoanalysis, and that its propositions are truth-evaluable. Accordingly, it seems that preference should be given to whatever reconstruction of psychoanalytic theory does most to advance its claim to truth - within, of course, exegetical constraints (what a reconstruction offers must be visibly present in Freud’s writings).

Viewed in these terms, psychoanalytic explanation is an ‘extension’ of ordinary psychology, one that is warranted by demands for explanation generated from within ordinary psychology itself. This has several crucial ramifications. It eliminates, as ill-conceived, the question of psychoanalysis’ scientific status - an issue much discussed, as proponents of different philosophies of science have argued for and against psychoanalysis’ agreement, or lack of agreement, with the canons of scientific method. Demands that psychoanalytic explanation should be demonstrated to receive inductive support, commit itself to testable psychological laws, and contribute effectively to the prediction of action then have no more pertinence than the same demands pressed on ordinary psychology - which is not very great. When the conditions for legitimacy are appropriately scaled down, it is extremely likely that psychoanalysis succeeds in meeting them: for psychoanalysis does deepen our understanding of psychological laws, improve the predictability of action in principle, and receive inductive support in the special sense appropriate to interpretative practices.

Furthermore, to the extent that psychoanalysis may be seen as structured by and serving well-defined needs for explanation, there is proportionately diminished reason for thinking that its legitimation turns on the analysand’s assent to psychoanalytic interpretations, or on the transformative power (whatever it may be) of such interpretations. Certainly it is true that psychoanalytic explanation has a reflective dimension lacked by explanations in the physical sciences: psychoanalysis understands its object, the mind, in the very terms that the mind employs in its unconscious workings (such as its belief in its own omnipotence). But this point does not in any way count against the objectivity of psychoanalytic explanation. It does not imply that what it is for a psychoanalytic explanation to be true should be identified, pragmatically, with the fact that an interpretation may, for the analysand who gains self-knowledge, have the function of bringing unconscious mentality into a proper conceptual form. Nor does it imply that psychoanalysis’ attribution of unconscious content needs to be understood in anything less than full-bloodedly realistic terms: truth in psychoanalysis may be taken to consist in correspondence with an independent mental reality, a reality that is both endowed with ‘subjectivity’ and in many respects puzzling to its owner.

In the twentieth century, the last major, self-consciously naturalistic school of philosophy was American ‘pragmatism’, as exemplified particularly in the works of John Dewey (1859-1952). The pragmatists replaced traditional metaphysics and epistemology with the theories and methods of the sciences, and grounded their view of human life in Darwin’s biology. Following the Second World War, pragmatism was eclipsed by logical positivism and what might be called ‘scientific’ positivism, which sought a defining characteristic of all scientific statements. Ernst Mach is frequently regarded as a forerunner of logical positivism; he argues, in his book The Conservation of Energy, that only the objects of sense experience have any role in science. The task of physics is ‘the discovery of the laws of the connection of sensations (perceptions)’, and ‘the intuition of space is bound up with the organization of the senses . . . (so that) we are not justified in ascribing spatial properties to things which are not perceived by the senses’. Thus, for Mach, our knowledge of the physical world is derived entirely from sense experience, and the content of science is entirely characterized by the relationships among the data of our experience.

Nevertheless, pragmatism remains a going concern in the philosophy of science. It is often aligned with the view that scientific theories are not true or false, but are better or worse instruments for prediction and control. Charles Peirce (1839-1914) identifies truth itself with a kind of instrumentality: a true belief is the very best we could do by way of accounting for the experiences we have, predicting the future course of experience, and so on.

Peirce called the sort of inference which concludes that all A’s are B’s because there are no known instances to the contrary ‘crude induction’. It assumes that future experience will not be ‘utterly at variance’ with past experience. This is, Peirce says, the only kind of induction in which we are able to infer the truth of a universal generalization. Its flaw is that ‘it is liable at any moment to be utterly shattered by a single experience’; that is to say, warranted belief is possible only at the observational level. Induction tells us what theories are empirically successful, and thereby what explanations are successful. But the success of an explanation cannot, on historical grounds, be taken as an indicator of its truth.

The thesis that the goal of inquiry is permanently settled belief, and the thesis that the scientific attitude is a disinterested desire for truth, are united by Peirce’s definition of ‘true’. He does not think it false to say that truth is correspondence to reality, but shallow - a merely nominal definition, giving no insight into the concept. His pragmatic definition identifies truth with the hypothetical ideal which would be the final outcome of scientific inquiry were it to continue indefinitely. ‘Truth is that concordance of . . . [a] statement’ with the ideal limit towards which inquiry tends to bring scientific belief: ‘any truth more perfect than this destined conclusion, any reality more absolute than what is thought in it, is a fiction of metaphysics’. These remarks reveal something both of the subtlety and of the potential for tension within Peirce’s philosophy. His account of reality aims at a delicate compromise between the undesirable extremes of transcendentalism and idealism, his account of truth at a delicate compromise between the twin desiderata of objectivity and (in-principle) accessibility.

The question of what is and what is not philosophy is not simply a question of classification. In philosophy, the concepts with which we approach the world themselves become the topic of enquiry. A philosophy of a discipline such as history, physics, or law seeks not so much to solve historical, physical, or legal questions as to study the concepts that structure such thinking, and to lay bare their foundations and presuppositions. In this sense philosophy is what happens when a practice becomes self-conscious. The borderline between such ‘second-order’ reflection and ways of practising the first-order discipline itself is not always clear: philosophical problems may be tamed by the advance of a discipline, and the conduct of a discipline may be swayed by philosophical reflection. Any sharp division also neglects the fact that self-consciousness and reflection co-exist with activity. At different times there has been more or less optimism about the possibility of a pure or ‘first’ philosophy, offering a standpoint from which other intellectual practices can be impartially assessed and subjected to logical evaluation and correction; the task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much ‘positivist’ philosophy of science, few philosophers now subscribe to it. The contemporary spirit of the subject is hostile to any such possibility, and prefers to see philosophical reflection as continuous with the best practice of the first-order fields of enquiry themselves.

Nonetheless, the last two decades have been a period of extraordinary change in psychology. Cognitive psychology, which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and higher-level visual processing, has become a - perhaps the - dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour. Largely as a result of this paradigm shift, the level of interaction between the disciplines of philosophy and psychology has increased dramatically.

One of the central goals of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as that of biological function.

Typically, a functional explanation in biology says that an organ ‘χ’ is present in an animal because ‘χ’ has function ‘F’. What does that mean?

Some philosophers maintain that an activity of an organ counts as a function only if the ancestors of the organ’s owner were naturally selected partly because they had similar organs that performed the same activity. Thus, the historical-causal property of having conferred a selective advantage is not just evidence that ‘F’ is a function; it is constitutive of ‘F’’s being the organ’s function.
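Put schematically (a reconstruction of the etiological analysis just stated, not a quotation from any particular author), the proposal reads:

\[
F \text{ is a function of organ } \chi \text{ in organism } O \;\longleftrightarrow\; O\text{'s ancestors were naturally selected partly because their } \chi\text{-like organs performed } F.
\]

On this reading the right-hand side is a historical-causal condition, which is why the analysis promises to make functional talk scientifically respectable.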

If this reductive analysis is right, a functional explanation turns out to be a sketchy causal explanation of the origin of ‘χ’. This makes the explanation scientifically respectable, since the ‘because’ indicates a weak relation of partial causal contribution.

However, this construal is not intuitively satisfying. To say that ‘χ’ is present because it has a function is normally taken to mean, roughly, that ‘χ’ is present because it is supposed to do something useful. Yet this normal interpretation immediately makes the explanation scientifically problematic, because the claim that ‘χ’ is supposed to do something useful appears to be normative and non-objective.

The philosophy of physics is another area in which studies of this sort have been actively pursued. In undertaking this work, philosophers need not and do not assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using - accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

On this account of intentionality, the biologically basic cases of perception and action matter as much as the paradigms usually discussed, which are beliefs, or sometimes beliefs and desires. Intentional states in general have both a propositional content and a psychological mode, and it is the psychological mode that determines the direction of fit with which the intentional state represents its conditions of satisfaction. Even those intentional states with propositional content which have neither a mind-to-world nor a world-to-mind direction of fit contain component beliefs and desires, and these component beliefs and desires do have their own direction of fit.

Once again, the paradigm cases discussed in accounts of intentionality are usually beliefs, or sometimes beliefs and desires. However, the biologically most basic forms of intentionality are in perception and intentional action. These also have certain formal features which are not common to beliefs and desires. Consider a case of perception. Suppose I see my hand in front of my face. What are the conditions of satisfaction? First, the perceptual experience of the hand in front of my face has as its condition of satisfaction that there is a hand in front of my face. Thus far the condition of satisfaction is the same as that of the belief that there is a hand in front of my face. But with perceptual experience there is this difference: in order that the intentional content be satisfied, the fact that there is a hand in front of my face must cause the very experience whose intentional content is that there is a hand in front of my face. This has the consequence that perception has a special kind of condition of satisfaction that we might describe as ‘causally self-referential’. The full conditions of satisfaction of the perceptual experience are, first, that there be a hand in front of my face, and second, that the fact that there is a hand in front of my face cause the very experience of whose conditions of satisfaction it forms a part. We can represent this in our canonical form as:

Visual experience (that there is a hand in front of my face, and that the fact that there is a hand in front of my face is causing this very experience).

Furthermore, visual experiences have a kind of conscious immediacy not characteristic of beliefs and desires. A person can literally be said to have beliefs and desires while sound asleep. But one can only have visual experiences of a non-pathological kind when one is fully awake and conscious, because visual experiences are themselves forms of consciousness.

Event memory is a kind of halfway house between the perceptual experience and the belief. Memory, like perceptual experience, has the causally self-referential feature: unless the memory is caused by the event of which it is the memory, it is not a case of satisfied memory. But unlike the visual experience, it need not be conscious; one can be said to remember something while sound asleep. Beliefs, memory and perception all have the mind-to-world direction of fit, and memory and perception have the world-to-mind direction of causation.

Increasingly, proponents of the intentional theory of perception argue that perceptual experience is to be differentiated from belief not only in terms of attitude, but also in terms of the kind of content the experience is an attitude towards. On one such view, to ascribe contents within a certain class of content-involving states is for the attribution of those states to make the subject as rationally intelligible as possible, in the circumstances. In one form or another, this idea is found in the writings of Davidson (1917-2003), who introduced the position known as ‘anomalous monism’ in the philosophy of mind, instigating a vigorous debate over the relation between mental and physical descriptions of persons, and the possibility of genuine explanation of events in terms of psychological properties. Although Davidson is a defender of the doctrines of the ‘indeterminacy of radical translation’ and the ‘inscrutability of reference’, his approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly ‘extensional’ approach to language. Davidson is also known for his rejection of the idea of a ‘conceptual scheme’, thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops so does the coherence of the idea that there is anything to translate.

Intentional action has interesting symmetries and asymmetries with perception. Like perceptual experiences, the experiential component of intentional action is causally self-referential. If, for example, I am now walking to my car, then the conditions of satisfaction of my present experience of acting are that there be certain bodily movements, and that this very experience of acting cause those bodily movements. What is more, like perceptual experience, the experience of acting is typically a conscious mental event. However, unlike perception and memory, the direction of fit of the experience of acting is world-to-mind: my intention will only be fully carried out if the world changes so as to match the content of the intention (hence the world-to-mind direction of fit), and the intention will only be fully satisfied if the intention itself causes the rest of its conditions of satisfaction (hence the mind-to-world direction of causation).
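By parity with the canonical form given above for perception, the experience of acting might be rendered as follows (a parallel construction suggested by the text, not quoted from it):

Experience of acting (that there be certain bodily movements, and that this very experience of acting is causing those bodily movements).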

Proponents of the intentional theory of perception argue, then, that perceptual experience is to be differentiated from belief not only in terms of attitude but also in terms of the kind of content the experience is an attitude towards, and that attention to such content gives a better understanding of a person’s reasons, of the array of emotions and sensations to which he is subject, of what he remembers and what he forgets, and of how he reasons beyond the confines of minimal rationality. Content-involving perceptual states, too, play a fundamental role in individuating content, and this role cannot be understood purely in terms of relations of minimal rationality. A perception of the world as being a certain way is not, and could not be, under a subject’s rational control. Though it is true that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons - observational beliefs about the environment - have contents which can only be elucidated by referring back to the representational content of perceptual experience. In this respect (as in others), perceptual states differ from those beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing: for frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.

We are acutely aware of the effects of our own memory, its successes and its failures, so that we have the impression that we know something about how it operates. But with memory, as with most mental functions, what we are aware of is the outcome of its operation and not the operation itself. To our introspection, the essence of memory is language-based and intentional: when we appear as a witness in court, the truth as we report it is what we say about what we intentionally retrieve. This is, however, a very restricted view of memory, albeit one with a distinguished history. William James (1842-1910) was an American psychologist and philosopher whose own emotional needs gave him an abiding interest in problems of religion, freedom, and ethics; the popularity of these themes and his lucid and accessible style made James the most influential American philosopher of the beginning of the twentieth century. James said that ‘Memory proper is the knowledge of a former state of mind after it has already once dropped from consciousness, or rather it is the knowledge of an event, or fact, of which meantime we have not been thinking, with the additional consciousness that we have thought or experienced it before’.

One clue to the underlying structure of our memory system might be its evolutionary history. We have no reason to suppose that a special memory system evolved recently, or to consider linguistic aspects of memory and intentional recall as primary. Instead, we might assume that such features are later additions to a much more primitive filing system. From this perspective one would view memory as having the primary function of enabling us (the organism as a whole, that is, not the conscious self) to interpret the perceptual world and helping us to organize our responses to changes that take place in the world.

The capacity to remember comprises at least two things: (1) the capacity to recall past experiences, and (2) the capacity to retain knowledge that was acquired in the past. It would be a mistake to omit (1), for not every instance of remembering something is an instance of retaining knowledge. Suppose that as a young child you saw the Sky Dome in Toronto, but you did not know at the time which building it was. Later you learn what the Sky Dome is, and you remember having seen it when you were a child. This is an example of obtaining knowledge of a past fact by recalling a past experience, but not an example of retaining knowledge, because at the time you saw it you did not know what you were seeing, since you did not know what the Sky Dome was. Furthermore, it would be a mistake to omit (2), for not every instance of remembering something is an instance of recalling the past, let alone a past experience. For example, by remembering my telephone number I retain knowledge without recalling any past experience, and by remembering the date of the next elections I retain knowledge of a future fact.

According to Aristotle (De Memoria), memory cannot exist without imagery: we remember past experiences by recalling images that represent them. This theory - the representative theory of memory - was also held by David Hume and Bertrand Russell (1921). It is subject to three objections, the first of which was recognized by Aristotle himself: if what I remember is an image now present to my mind, how can it be that what I remember belongs to the past? According to the second objection, we cannot tell the difference between images that represent actual memories and those that are mere figments of the imagination. Hume suggested two criteria to distinguish between these two kinds of images, vivacity and orderliness, and Russell a third, an accompanying feeling of familiarity. Critics of the representative theory would argue that these criteria are not good enough, that they do not allow us to distinguish reliably between true memories and mere imagination. This objection is not decisive, as it only calls for a refinement of the proposed criteria. Nevertheless, the representative theory succumbs to the third objection, which is fatal: remembering something does not require an image. In remembering their dates of birth, or telephone numbers, people do not, at least not normally, have an image of anything. In developing an account of memory, we must therefore proceed without making images an essential ingredient. One way of accomplishing this is to take the thing that is remembered to be a proposition, the content of which may be about the past, present, or future. Doing so would provide us with an answer to the problem pointed out by Aristotle: if the proposition we remember is a truth about the past, then we remember the past by virtue of having a cognition of something present - the proposition that is remembered.

What, then, are the necessary and sufficient conditions of remembering a proposition, of remembering that ‘p’? To begin with, believing that ‘p’ is not a necessary condition, for at a given moment ‘t’ I may not be aware of the fact that I still remember that ‘p’, and thus do not believe that ‘p’ at ‘t’. It is also possible that I remember that ‘p’ but, perhaps because I gullibly trust another person’s judgement, unreasonably disbelieve that ‘p’. It will, however, be helpful to focus on the narrower question: under which conditions is S’s belief that ‘p’ an instance of remembering that ‘p’? It is such an instance only if ‘S’ either (1) previously came to know that ‘p’, or (2) had an experience that put ‘S’ in a position subsequently to come to know that ‘p’. Call this the ‘original input condition’. Suppose I learned in the past that 12 x 12 = 144 but subsequently forgot it, and I now come to know again that 12 x 12 = 144 by using a pocket calculator. Here the original input condition is fulfilled, but obviously this is not an example of remembering that 12 x 12 = 144. Thus, a further condition is necessary: for S’s belief that ‘p’ to be a case of remembering that ‘p’, the belief must be connected in the right way with the original input. Call this the ‘connection condition’. According to Carl Ginet (1988), the connection must be ‘epistemic’: at any time since the original input at which ‘S’ acquires evidence sufficient for knowing that ‘p’, ‘S’ already knew that ‘p’. Critics would dispute that a purely epistemic account of the connection condition will suffice. They would insist that the connection be ‘causal’: for ‘S’ to remember that ‘p’, there must be an uninterrupted causal chain connecting the original input with the present belief.
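The two conditions just introduced can be summarized schematically (a rendering of the foregoing, not the author’s own notation):

\[
S \text{ remembers that } p \;\Longrightarrow\; \mathrm{Input}(S,p) \,\wedge\, \mathrm{Connected}(S,p)
\]

where Input(S, p) holds just in case ‘S’ previously came to know that ‘p’, or had an experience putting ‘S’ in a position subsequently to come to know that ‘p’ (the original input condition), and Connected(S, p) holds just in case S’s present belief that ‘p’ is connected in the right way - epistemically, or, on the causal account, by an uninterrupted causal chain - with that original input (the connection condition).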

Not every case of remembering that ‘p’ is one of knowing that ‘p’: although I remember that ‘p’, I might not believe that ‘p’, and I might not be justified in believing that ‘p’, for I might have information that undermines or casts doubt on ‘p’. When, however, do we know something by remembering it? What are the necessary and sufficient conditions of knowing that ‘p’ on the basis of memory? Applying the traditional conception of knowledge, we may say that ‘S’ knows that ‘p’ on the basis of memory just in case (1) ‘S’ clearly and distinctly remembers that ‘p’, (2) ‘S’ believes that ‘p’, and (3) ‘S’ is justified in believing that ‘p’. (Since (1) entails that ‘p’ is true, adding a condition requiring the truth of ‘p’ is not necessary.) Whether this account of memory knowledge is correct, and how it is to be fleshed out in detail, are questions which concern the nature of knowledge and epistemic justification in general, and thus give rise to much controversy.

Memory knowledge is possible only if memory is a source of justification. Common sense assumes that it is: we naturally take it that, unless it is undermined or even contradicted by our background beliefs, we do remember what we seem to remember. Thus we trust that we have knowledge of the past. Sceptics, however, would argue that this trust is ill-founded. According to a famous argument by Bertrand Russell (1927), it is logically possible that the world sprang into existence five minutes ago, complete with our memories and with evidence, such as fossils and petrified trees, suggesting a past of millions of years. If this is possible, then there is no logical guarantee that we actually do remember what we seem to remember. Consequently, so the sceptics would argue, there is no reason to trust memory. Some philosophers have replied to this line of reasoning by trying to establish that memory is necessarily reliable, that it is logically impossible for the majority of our memory beliefs to be false. Alternatively, our commonsense view may be defended by pointing out that the sceptical conclusion - that it is unreasonable to trust memory - does not follow from the premise that memory fails to guarantee that what we seem to remember is true. For the argument to be valid, it would have to be supplemented with a further premise: for a belief to be justified, its justifying reason must guarantee its truth. Many contemporary epistemologists would dismiss this premise as unreasonably strict. One of the chief reasons for resisting it is that accepting it is harder to motivate than our trust in particular, clear and vivid deliverances of memory: accepting these as true would actually appear less error-prone than accepting an abstract philosophical principle which implies that our acceptance of such deliverances is not justified.

The distinction between these forms of memory is a crude one, not least because terms such as ‘conscious’ and ‘explicit’ are themselves unclear. Nonetheless, as Schacter, McAndrews and Moscovitch (1988) describe, amnesia is an inability to remember recent experiences (even from the very recent past) and to learn certain kinds of new information, resulting from selective brain damage that leaves perceptual, linguistic and intellectual skills largely intact. Memory deficits have traditionally been studied using techniques designed to elicit explicit memories: amnesic people might be instructed to think back to a learning episode and either recall information from that interval of their lives or say whether a presented item had previously been encountered in the learning episode. Yet the very same people who perform poorly on such explicit tests can show preserved learning when tested indirectly. The acquisition of skills is a case in point, and there is considerable experimental evidence of amnesic patients improving over a series of learning episodes. A striking example is a densely amnesic patient who learned how to use a personal computer over numerous sessions, despite declaring at the beginning of each session that he had never used a computer before. In addition to this sort of capacity to learn over a succession of episodes, amnesics have performed well after single, short-lived episodes (such as completing previously shown words when given three-letter cues). Amnesic people thus clearly reveal a difference between conscious and non-conscious memory, and similar dissociations can be observed in normal subjects, as when performance on indirect tasks reveals the effects of prior events that are not remembered.

Basically, memory serves to enable us to interpret the perceptual world and to help us organize our responses to the challenges of change that take place in the world. For both functions we have to accumulate experiences in a memory system in such a way as to enable productive access to that experience at the appropriate times. Memory, then, can be seen as the repository of experience. Of course, beyond a certain age we are able to use our memories in different ways, both to store information and to retrieve it. Language is vital in this respect, and it might be argued that much of socialization and the whole of schooling are devoted to just such an extension of an evolutionarily (relatively) straightforward system. It follows that most of the operation of our memory system is preconscious. That is to say, consciousness only has access to the products of the memory processes and not to the processes themselves. The aspects of memory that we are conscious of can be seen as the final stage in a complex and hidden set of operations.

How should we think about the structure of memory? The dominant metaphor is that of association. Words, ideas, and emotions are seen as being linked together in an endless, shapeless entanglement. That is the way our memory can appear to us if we attempt to reflect on it directly. However, it would be a mistake to dwell too much on the deliverances of consciousness and imagine that they represent the inner structure of memory. For a cognitive psychologist interested in natural memory phenomena there were a number of reasons for being deeply dissatisfied with theories based on associative networks. One ubiquitous class of memory failure seemed particularly troublesome: the experience of being able to recall a great deal of what we know about an individual other than their name. The familiar complaint is ‘I know the face, but I just can’t place the name’; yet if someone else produced the name we may, perhaps, have been able to retrieve the rest of the information needed.

How might various theories of memory account for this phenomenon? First we can take an associative network approach. In an idealized associative network, concepts, such as the concept of a person, are represented as nodes, with associated nodes being connected through links. Generally speaking, the links define the nature of the relationship between nodes, e.g., the subject-predicate distinction. Suppose that the name of the person we are trying to recall is Bill Smith. We would have a Bill Smith node (or a node corresponding to Bill Smith), with all the available information concerning Bill Smith being linked to it in some kind of propositional form, including Bill Smith’s name. Now, failure to retrieve Bill Smith’s name, while at the same time recalling other information about Bill Smith, would have to be due to an inability to traverse the links to the node for the name. However, this seems to contradict a defining property of such networks - content addressability. That is to say, given that any one constituent of a propositional representation can be accessed, the propositional node, and consequently all the other nodes linked to it, should also be accessible. Thus, if we are able to recall where Bill Smith lives, where he works, and whom he is married to, then we should, in principle, be able to access the node representing his name. To account for the inability to do so, some sort of temporary ‘blocking’ of content addressability would seem to be needed. Alternatively, the directionality of links would have to be specified, though this would have to be done on an independently motivated basis.
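A minimal sketch may make the difficulty vivid. The following toy network (hypothetical node and link names, not a model drawn from the literature) shows why content addressability makes the selective name failure puzzling: once any constituent linked to the Bill Smith node is accessed, every other linked node, including the name, should in principle be reachable.

    # Minimal associative-network sketch (hypothetical nodes and relations):
    # a person node linked to attribute nodes by labelled relations.
    from collections import deque

    links = {
        "Bill Smith": {
            "name": "the name 'Bill Smith'",
            "married_to": "Mary Smith",
            "works_at": "the bank",
            "last_talk": "a talk on memory",
        },
    }

    # Build reverse links so that any constituent gives access back to the person node.
    reverse = {}
    for node, relations in links.items():
        for relation, target in relations.items():
            reverse.setdefault(target, set()).add(node)

    def reachable(start):
        """Return every node reachable from `start`, traversing links in either direction."""
        seen, frontier = {start}, deque([start])
        while frontier:
            node = frontier.popleft()
            neighbours = set(links.get(node, {}).values()) | reverse.get(node, set())
            for n in neighbours:
                if n not in seen:
                    seen.add(n)
                    frontier.append(n)
        return seen

    # Content addressability: starting from any known fact (e.g., whom he is married to),
    # the name node is reachable too - which is why a selective failure to retrieve only
    # the name seems to call for an extra mechanism such as temporary 'blocking' of links.
    print(reachable("Mary Smith"))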

Next, consider schema approaches. Schema models stipulate that there are abstract representations, i.e., schemata, in which all invariant information concerning any particular thing is represented. So we would have a person schema for Bill Smith that would contain all the invariant information about him. This would include his name, personality traits, attitudes, where he lived, whether he had a family, etc. It is not clear how one would deal with our example. Since someone’s name is a quintessentially invariant property, then, given that it is known, it would have to be represented in the schema for that person. And, in our example, other invariant information, as well as variant, non-schematic information (e.g., the last talk he had given), was available for recall. This must be taken as evidence that the schema for Bill Smith was accessed. Why, then, were we unable to recall one particular piece of information that would have to be represented in a schema we clearly had access to? We would have to assume that within the person schema for Bill Smith are sub-schemata, one of which contains Bill Smith’s name, another the name of his wife, and so forth. We would further have to assume that access to the sub-schemata is independent and that, at the time in question, the one containing Bill Smith’s name was temporarily inaccessible. Unfortunately the concept of temporary inaccessibility is without precedent in schema theory and does not seem to be independently motivated.
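A comparable sketch of the schema proposal (again with hypothetical field names) makes the ad hoc character of the required assumption visible: each piece of invariant information must sit in its own sub-schema with an independent, and here simply stipulated, accessibility switch.

    # Schema sketch: invariant information held in sub-schemata, each of which
    # can be independently - and, on this proposal, only temporarily - inaccessible.
    person_schema = {
        "name": {"content": "Bill Smith", "accessible": False},   # temporarily blocked
        "wife": {"content": "Mary Smith", "accessible": True},
        "job":  {"content": "works at the bank", "accessible": True},
    }

    def recall(schema, field):
        """Return the content of a sub-schema, or None if it is inaccessible."""
        sub_schema = schema.get(field)
        if sub_schema is None or not sub_schema["accessible"]:
            return None
        return sub_schema["content"]

    # Everything but the name can be recalled; the phenomenon is reproduced only
    # by stipulating the 'accessible' flag, which is the unmotivated element.
    for field in person_schema:
        print(field, "->", recall(person_schema, field))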

Nonetheless, there are two other classes of memory problem that do not fit comfortably into the conventional frameworks. One is that of not being able to recall an event in spite of the most detailed cues. This is commonly found when one partner is attempting to remind the other of a shared experience. The other is that we have all had the experience of a memory being triggered spontaneously by something that was just an irrelevant part of the background of an event. Common triggers of such experiences are specific locales in town or country, scents, and certain pieces of music.

What we learn from these kinds of events is that we need a model which readily allows for the following properties:

(1) Not all knowledge is directly retrievable;

(2) The central parts of an episode do not necessarily cue recall of that episode;

(3) Peripheral cues, which are non-essential parts of the context, can cue recall.

In response to these requirements, the framework within which the model is couched is that of information processing. We suppose, first, that memory consists of discrete units, or ‘records’, each containing information relevant to an ‘event’, an event being, for example, a person or a personal experience. Information contained in a record could take any number of forms, with no restrictions being placed on the way information is represented, on the amount represented, or on the number of records that could contain the same nominal information. Attached to each of these records would be some kind of access key. The function of this access key is singular: it enables the retrieval of the record and nothing more. Only when the particular access key is used can the record, and the information contained therein, be retrieved. As with the record, any type of information could be contained in the access key. However, two features would distinguish it from the record. First, the contents of the access key would be in a different form from that of the record, e.g., represented in a phonological or other central code. Second, the contents of the access key would not themselves be retrievable.
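A minimal sketch of the record-and-access-key idea (hypothetical class and field names, not the authors’ own formulation): the key gates retrieval of the record, while the key’s own contents are never returned.

    # Record/access-key sketch: a record is retrievable only via its access key,
    # and the contents of the key itself are never handed back to the caller.
    class Record:
        def __init__(self, access_key, contents):
            self._access_key = access_key   # e.g., a phonological or other central code
            self._contents = contents       # unrestricted information about an 'event'

        def retrieve(self, probe):
            """Return the record's contents only if the probe matches the access key."""
            return self._contents if probe == self._access_key else None

    store = [
        Record("bill-smith", {"person": "Bill Smith", "works_at": "the bank"}),
        Record("office-party", {"event": "the office party", "who": "Bill Smith"}),
    ]

    # Retrieval succeeds only when the probe reproduces the access key; note that
    # the same nominal information ('Bill Smith') may sit in several records.
    print([record.retrieve("bill-smith") for record in store])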

The nature of the match required between the ‘description’ and a heading (the access key attached to a record) will be a function of the type of information in the description. If the task is to find the definition of a word or information on a named individual, then a precise match may be required, at least for the verbal part of the description. We assume that the headings are searched in parallel. On many occasions there will be more than one heading that matches the description; however, we require that only one record be retrieved at a time. Evidence in support of this assumption is summarized in Morton, Hammersley and Bekerian (1985); the data indicate that the more recent of two candidate records is the one retrieved. We conclude, first, that once a match is made the search process terminates and, secondly, that the matching process is biased in favour of more recent headings. There is, of course, no guarantee that the retrieved record will contain the information that is sought: the records may be incomplete or wrong. In such cases, or in the case that no record has been retrieved, there are two options: either the search is continued or it is abandoned. If the search is to be continued then a new description will have to be formed, since searching again with the same description would result in the same outcome as before. Thus, there has to be a set of criteria upon which a new description can be based.
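The retrieval cycle just described can be caricatured in a few lines (a sketch of the general idea under the assumptions above, not Morton, Hammersley and Bekerian’s own formulation): headings are matched against a description, the most recent matching record wins, and an unsatisfactory result prompts a fresh description.

    # Retrieval sketch: match a description against headings, prefer the most
    # recently laid-down matching record, and re-describe if the result fails.
    store = [  # ordered oldest to newest; each entry is (heading, record)
        ("bill smith 1979", {"person": "Bill Smith", "note": "met at a conference"}),
        ("bill smith 1984", {"person": "Bill Smith", "note": "gave a talk on memory"}),
    ]

    def retrieve(description):
        """Return the most recent record whose heading matches the description."""
        matches = [record for heading, record in store if description in heading]
        return matches[-1] if matches else None       # recency bias: latest match wins

    def search(descriptions, satisfies):
        """Try each description in turn; stop at the first satisfactory record."""
        for description in descriptions:              # a failure prompts a new description
            record = retrieve(description)
            if record is not None and satisfies(record):
                return record
        return None                                   # search abandoned

    # The first description retrieves the more recent (1984) record, which lacks the
    # sought information; a reformed description then recovers the older record.
    print(search(["bill smith", "bill smith 1979"],
                 lambda record: "conference" in record["note"]))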

Retrieval depends upon a match between the description and a heading. The relationship between the given cue and the description is left open; it is clear that there needs to be a process of description formation which will pick out the most likely descriptors from the given cue. Clearly, for the search process to be rational, the set of descriptors and the set of headings should overlap. The only reasonable state of affairs would be that the creation of headings and the creation of descriptions are the responsibility of the same mechanism.

In his early works ‘Distributive Normal Forms’ (1953) and ‘Form and Content in Quantification Theory’ (in Two Papers on Symbolic Logic, 1955), the Finnish philosopher Jaakko Hintikka (1929- ) developed two logical theories which he has later applied to many different areas: the theory of distributive normal forms for quantification theory, and the theory of model sets, which yields semantically motivated proof procedures for quantification theory and modal logics.

Although Hintikka has worked over a wide area, his work shows a great deal of conceptual and theoretical unity. This is partly due to the logical and semantical methods he uses, and partly to the transcendental character (in the Kantian sense) of his philosophy. Hintikka has emphasized the role of rule-governed human activities in knowledge acquisition and in cognitive representation: his game-theoretical approach to meaning is a case in point. The structures of such activities can be taken to provide the synthetic a priori features of our knowledge. In this respect Hintikka’s philosophy is Kantian in spirit.

However, it cannot be the case that all terms of a language are explicitly definable in that language - that would involve circularity - so the most one could hope for is the explicit definition of theoretical terms by means of observational ones; and even granting the distinction between theoretical and observational terms, the prospects of finding explicit definitions for all theoretical terms appear dim. Some theoretical terms - particularly those involved in functional identities - look like shorthand for expressions such as ‘product of mass and velocity’. But others are not explicitly definable. In the most fundamental sense, to define is to delimit. Thus, definitions serve to fix the boundaries of phenomena or the range of applicability of terms or concepts. That whose range is to be delimited is called the ‘definiendum’, and that which delimits it the ‘definiens’. Social-science practice tends to focus on specifying the application of concepts through ‘formal’ operational definitions. Philosophical discussions have concentrated almost exclusively on articulating ‘definitional forms’ for terms.

Definitions are ‘full’ if the ‘definiens’ completely delimits the ‘definiendum’, and ‘partial’ if it only brackets or circumscribes it. ‘Explicit definitions’ are full definitions where the ‘definiendum’ and the ‘definiens’ are asserted to be equivalent. Theories or models which are so rich in structure that sub-portions are functionally equivalent to explicit definitions are said to provide ‘implicit definitions’. In formal contexts our basic understanding is provided by the Beth definability theorem, which not only yields a fundamental understanding of explicit definition, but also shows that relaxing the conditions on explicit definition enables an understanding of Carnap’s notion of partial interpretation: whereas explicit definitions fully specify (implicitly define) the referents of theoretical terms within intended models of the theory, creative partial definitions serve only to restrict the range of objects in intended models that could be the referents of theoretical terms.

As to the individuation of theories: what determines whether theories T1 and T2 are instances of the same theory or distinct theories? By construing scientific theories as partially interpreted syntactic axiom systems TC, positivism made the specifics of the axiomatization the individuating features of the theory. Thus different choices of axioms T, or alterations in the correspondence rules C - say, to accommodate a new measurement procedure - resulted in a new scientific theory. Positivists also held that the axioms and correspondence rules implicitly defined the meanings of the theory’s descriptive terms τ. Thus significant alterations in the axiomatization would result not only in a new theory T’C’ but in one with changed meanings τ’. Kuhn and Feyerabend maintained that the resulting changes could make TC and T’C’ non-comparable, or ‘incommensurable’. Attempts to explore individuation issues for theories via meaning change or ‘incommensurability’ proved unsuccessful and have been largely abandoned.

Feyerabend’s differences with Kuhn are, first, that Feyerabend’s variety of incommensurability is more global and cannot be localized in the vicinity of a single problematic term or even a cluster of terms, since Feyerabend holds that fundamental changes of theory lead to changes in the meaning of all the terms in a particular theory. The other significant difference concerns the reason for incommensurability. Whereas Kuhn thinks that incommensurability stems from specific translational difficulties involving problematic terms, Feyerabend’s variety of incommensurability seems to result from a kind of extreme holism about the nature of meaning itself.

One significant point of agreement between Kuhn and Feyerabend is that neither thinks that incommensurability is incomparability: both countenance, and indeed recommend, alternative modes of comparison. Feyerabend says that ‘the use of incommensurable theories for the purpose of criticism must be based on methods which do not depend on the comparison of statements with identical constituents; such methods are readily available’. But although he mentions a number of such methods, he does not explicate them in full. For example, he says that theories can be compared using the ‘pragmatic theory of observation’, according to which one attends to the causes of the production of a certain observational sentence rather than to the meaning of that sentence. Further, he argues that ‘we do not compare meanings: we investigate the conditions under which a structural similarity can be obtained’, insisting that ‘there may be empirical evidence against [theory], and for another theory, without any need for similarity of meaning’. On a more sarcastic, though revealing, note, Feyerabend states: ‘Of course, some kind of comparison is always possible (for example, one physical theory may sound more melodious when read aloud to the accompaniment of a guitar than another physical theory).’ At any rate, he insists that ‘it is possible to use incommensurable theories for the purpose of mutual criticism’, adding that this removes one of the main ‘paradoxes’ of the approach. Finally, he uses the same analogy that Kuhn uses to explain a scientist’s ability to learn a new theory, that of a child learning a new language: rather than translating between languages, ‘we can learn a language or a culture from scratch, as a child learns them, without detour through our native tongue’.

Nevertheless, it is commonly supposed that definitions are analytic specifications of meaning. In some cases, such as stipulative definitions, this may be so; however, some philosophers allow specifications of meaning to be synthetic. Reduction sentences are often descriptions of measurement apparatus specifying empirical correlations between detector output readings and values for parameters. These are synthetic and are rarely mere specifications of meaning. The larger point is that specification of meaning is only one of many possible means for delimiting the ‘definiendum’. Specification of meaning seems tangential to the bulk of scientific definitional practice.

Definitions are said to be ‘creative’ if their addition to a theory expands its content, and ‘non-creative’ if they do not. More generally, we can say that definitions are creative whenever the ‘definiens’ asserts contingent relations involving the ‘definiendum’. Thus definitions providing analytic specifications of meaning are non-creative. Most explicit definitions are non-creative, and hence ‘eliminable’ from theories without loss of empirical content. One could relativize the distinction so that definitions redundant given accepted theory or background belief in the scientific context are counted as non-creative. Either way, most other scientific definitions are creative expressions of empirical correlation. Thus, for purposes of philosophical analysis, suppositions that definitions are either non-creative or mere meaning specifications demand explicit justification. Much of the literature concerning incommensurability and meaning change in science turns on uncritical acceptance of such suppositions.

The issue of incommensurability remains a live one. It does not arise just for a logical empiricist account of scientific theories, but for any account that allows for the linguistic representation of theories. Discussion of linguistic meaning cannot be banished from philosophical analysis; language remains prominent in the daily work of science itself, and its place is not about to be taken over by any other representational medium. The challenge facing anyone who holds that the scientific enterprise sometimes requires us to make a point-by-point linguistic comparison of rival theories is therefore to respond to the specific semantic problems raised by Kuhn and Feyerabend. Otherwise, the challenge is to articulate another way of putting scientific theories in the balance and weighing them against one another.

Confusions abound in scientific and philosophical discussions of ‘operational definitions’. The notion was first introduced by P.W. Bridgman (1938) with reference to non-creative explicit full definitions specifying meaning in terms of operations performed in the measurement process. Behaviourist social scientists expanded the notion to include creative partial definitions, and in practice most operational definitions can be cast as synthetic, creative reduction sentences specifying empirical relations between measurement procedures and intervening variables or hypothetical constructs. When challenged on such practices, social scientists respond that it is just a matter of quibbling over semantics - a response appropriate to Bridgman’s sort of operational definition, but not to their own.

Many philosophers have been concerned with admissible ‘definitional forms’. Some require ‘real definitions’ - a form of explicit definition in which the ‘definiens’ equates the ‘definiendum’ with an essence specified as a conjunction Α1 ∧ . . . ∧ Αn of attributes. By contrast, ‘nominal definitions’ use non-essential attributes. The ‘Aristotelian definitional form’ further requires that real definitions be hierarchical, where the species of a genus share Α1, . . ., Αn-1, being differentiated only by the remaining essential attribute Αn. Such definitional forms are inadequate for evolving biological species whose essences may vary. ‘Disjunctive polytypic definitions’ allow changing essences by equating the ‘definiendum’ with a finite disjunction of conjunctive essences. But future evolution may produce further new essences, so partially specified ‘potentially infinite disjunctive polytypic definitions’ were proposed. Such ‘explicit definitions’ fail to delimit the species, since they are incomplete. A superior alternative is to formulate a reduction sentence for each subsequently evolved essence.
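Schematically (a rendering of the forms just described, not a quotation), the contrast might be put as follows, where K is the kind being defined and T a test condition:

\[
\begin{aligned}
\text{Explicit (real) definition:}\quad & Kx \leftrightarrow A_1 x \wedge \dots \wedge A_n x \\
\text{Disjunctive polytypic definition:}\quad & Kx \leftrightarrow (A^1_1 x \wedge \dots \wedge A^1_n x) \vee \dots \vee (A^m_1 x \wedge \dots \wedge A^m_n x) \\
\text{Reduction sentence (partial):}\quad & Tx \rightarrow (Kx \leftrightarrow A_1 x \wedge \dots \wedge A_n x)
\end{aligned}
\]

The reduction-sentence form delimits K only under the condition T, leaving subsequently evolved essences to be covered by further reduction sentences.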

Wittgenstein (1953) claimed that many natural kinds lack conjunctive essences: rather, their members stand only in a family resemblance to each other. An important extension of the theory of natural kinds, however, was provided by Putnam (1926- ) and Kripke (1940- ). These philosophers presented their account as applying to natural kind terms in ordinary language rather than to terms of theoretical science; typical examples are ‘water’, ‘gold’, and ‘lemon’. They claimed, on the basis of intuitive hypothetical cases, that the intention to refer to a natural kind determined by a possibly unknown real essence is part of a correct account of the normal use of these terms in ordinary language. If this is right, then it is certainly reasonable to extend the account to the technical uses of theoretical terms in science. If the account cannot be sustained for the case of ordinary language kind terms, appeal to it as an account of scientific terms will be more problematic.



We then have a conception of theory as essentially an embodiment of analogies, both formal and material, which describe regularities among the data of a given domain (models of data and phenomenal laws), with analogies between these and models of data in other domains, and so on in a hierarchy of levels of a unifying theoretical system. The ‘meaning of theoretical terms’ is given by analogies with familiar natural processes (e.g., mechanical systems), or by hypothetical models (e.g., Bohr’s planetary atom). In either case, descriptive terms of the analogues are derived metaphorically.

In evaluating the Kripke-Putnam theory, the crucial point to note for present purposes is that it rests on a strong ontological presupposition: contrary to Locke, a large class of ordinary language kind terms must actually pick out (more or less) the requisite sort of natural kind (unless, at any rate, the theory of natural kinds is to rest on a massive metaphysical delusion). And, in addition, we must have some sense, in advance of scientific illumination of the real essence of a kind, of whether that kind is a natural kind. Putnam claims explicitly, for example, that the stuff on Twin Earth would not have been water even if the explorers from Earth had arrived before any means had been discovered for distinguishing H2O from XYZ. The intention to refer to the real nature pre-exists the characterization of that nature.

Suppe (1918) urged that natural kinds are constituted by a single kind-making attribute (e.g., being gold), and that which patterns of correlation obtain between the kind-making attribute and other diagnostic characteristics is a factual matter. Thus issues of appropriate definitional form (e.g., explicit, polytypic, or cluster) are empirical, not philosophical, questions.

Definitions of concepts are closely related to explications, where imprecise concepts (explicanda) are replaced by more precise ones (explicata). The explicandum and explicatum are never equivalent. In an adequate explication the explicatum will accommodate all clear-cut instances of the explicandum and exclude all clear-cut non-instances. The explicatum decides what to do with cases where application of the explicandum is problematic. Explications are neither real nor normative definitions and are generally creative. In many scientific cases, definitions function more as explications than as meaning specifications or real definitions.

In later developments of Kuhn’s view, less emphasis is placed on what might be called ‘evaluative incommensurability’, and more on ‘linguistic incommensurability’. By 1983, Kuhn appeared to have moved away from evaluative incommensurability entirely, no longer speaking of differences in ‘methods’; he states that this later version is the same as the ‘original version’ of the incommensurability thesis, which he characterized as ‘the claim that two theories are incommensurable is then the claim that there is no language, neutral or otherwise, into which both theories, conceived as sets of sentences, can be translated without residue or loss’. If incommensurability equals untranslatability, what is it about scientific paradigms that precludes translation into a single common language, so that their claims can be set side by side and their points of agreement and disagreement isolated?

Meanwhile, the argument from illusion, pressed by philosophers who objected to direct realism, states that certain familiar facts about illusion disprove that theory of perception. There are, however, many versions of the argument which must be distinguished carefully: some of the distinctions centre on the content of the premises (the kind of appeal made to illusion); others centre on the interpretation of the conclusion.

A crude statement of direct realism might run as follows: in perception, we sometimes directly perceive physical objects and their properties; we do not always perceive physical objects by perceiving something else, e.g., a sense-datum. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually perceiving something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-datum theorists should want. At least many of the philosophers who objected to direct realism would prefer to express what they were objecting to in terms of a technical (and philosophically controversial) concept such as ‘acquaintance’. Using such a notion, we could define direct realism this way: in veridical experience we are directly acquainted with parts (e.g., surfaces) or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.

Bearing in mind that definitions either ‘report’ or ‘institute’ equivalences among verbal or symbolic expressions, definitions are, in form, either explicit or implicit.

An institutive definition explains how an expression will be used henceforth; a reportive definition gives an account of how an expression has been used. An explicit definition explains, by means of words given in use, how an expression given in mention has been or will be used - for instance, the words ‘the cat’ are given in mention in ‘‘The cat’ is on the mat’, and in use in ‘The cat is on the mat’. An implicit definition explains how an expression has been or will be used by using it, usually in conjunction with the use of other expressions.

Dictionary definitions are reportive and explicit. Symbols introduced in technical writings are usually institutive and explicit. When a word is learned in the context of its use, that context, in effect, provides a reportive, implicit definition. Formal axiomatic systems, in which the meaning of each expression is gathered from its formal-logical relationships with the other expressions, provide institutive, implicit definitions.

This same course of thought seemingly makes Plato suggest that it is possible to have knowledge only about Forms, and that knowledge about sensible objects is impossible. Moreover, he also seems to hold sometimes that we cannot have, about Forms, the kind of cognition - belief or opinion - that we do have of sensibles. Yet he allows that it is possible to make mistakes about Forms, and also to be in a cognitive state concerning Forms that seems indistinguishable from what he seems obliged to call false belief or opinion. This idea, too, requires some further explanation of the distinction between Forms and sensibles - a requirement that Plato seems to see some difficulty in satisfying.

Although in this phase of his work Plato concentrates on constructing a metaphysics that will make room for the possibility of knowledge, he does at the same time pay some attention to the problems that are characteristic of the first phase of his epistemology. In the 'Meno', the 'Phaedo' and the 'Republic', he develops what has been called the 'method of hypothesis', in which claims are accepted provisionally until they can be demonstrated unconditionally. In the 'Meno' and the 'Phaedo', he indicates that hypotheses are to be accepted only provisionally and not regarded as certain or unrevisable. In the 'Republic', however, he seems to maintain that one can somehow reach an 'unhypothesized' principle which will serve as the basis for demonstrating everything hitherto accepted merely hypothetically. He apparently implies that what is demonstrated thereby will have to do only with Forms. He also makes a suggestion, not clearly explained, that this 'principle' has something to do with the Form of the Good. There is no generally accepted interpretation of what Plato says here, but it seems to indicate that he accepted, or was seriously considering, some kind of foundationalist epistemology, which would start from some unshakable principle and derive from it the rest of what there is to be known about Forms. (As often, however, Plato seems to waver between thinking of the principle and what is derived from it as possessing propositional structure and treating them as non-propositionally structured objects.)

This method of hypothesis is earlier offered as something that is used by 'dialectic', the style of philosophizing that takes place through conversational questions and answers. When a concept proves inadequate, this justifies introducing a more sophisticated concept to account for the domain, upon which the analysis is repeated. The result of dialectical analysis is an integrated network of concepts which specifies the proper domain of each and which preserves the legitimate content of earlier concepts in the final, most comprehensive and adequate concept.



It was relational conceptual representation that most interested Kant and of relational concepts, he thought the concept of cause-and-effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause and effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on "the secure path of a science.” The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.

Consciousness may well be the most challenging and pervasive source of problems in the whole of philosophy. Our own consciousness is the most basic fact confronting us, yet it is almost impossible to say what consciousness is. Is mine like yours? Is ours like that of animals? Might machines come to have consciousness? Is it possible for there to be disembodied consciousness? Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to conceive the "I," or self, that is the spectator of this theatre? A difficulty in thinking about consciousness is that the problems seem not to be scientific ones: Leibniz remarked that if we could construct a machine that could think and feel, and blow it up to the size of a mill so as to be able to examine its working parts as thoroughly as we pleased, we would still not find consciousness; and he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about the way the emergence takes place, or why it takes place in just the way it does.

The nature of conscious experience has been the largest single obstacle to physicalism, behaviourism, and functionalism in the philosophy of mind: these are all views that, according to their opponents, can only be believed by feigning permanent anaesthesia. But many philosophers are convinced that we can divide and conquer: we may make progress by breaking the subject into different skills, and by recognizing that rather than a single self or observer we would do better to think of a relatively undirected whirl of cerebral activity, with no inner theatre, no inner lights, and above all no inner spectator.

Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) it gives us knowledge of the world around us; (2) we are conscious of that world by being aware of "sensible qualities": colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment; (3) such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound; (4) there ensues even more neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received (much of this complexity has been revealed by the difficulty of writing programs enabling computers to recognize quite simple aspects of the visual scene). The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings, and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like sense data or percepts exacerbates the tendency, for 'sense data' refers to the immediate objects of perceptual awareness, such as colour patches and shapes, usually supposed distinct from the surfaces of physical objects. Their perception is held to be more immediate; and because sense data are private and cannot appear other than they are, they are objects that change in our perceptual fields when conditions of perception change, whereas physical objects remain constant. And since physical objects can appear other than they are, there must, on this way of thinking, be private, mental objects that have all of the qualities the physical objects appear to have. The claim that perception gives us knowledge of the world around us is then quickly threatened, for there now seems little connection between these items in immediate experience and any independent reality. Reactions to this problem include scepticism and idealism.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature: to have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evidently personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.

To be without an idea, on this usage, is to be without a concept, and likewise to be without a concept is to be without an idea. The term 'idea' (Gk., visible form) stretches all the way from one pole, where it denotes a subjective, internal presence in the mind, somehow thought of as representing something about the world, to the other pole, where it stands for an eternal, timeless, unchanging form or concept: the concept of the number series or of justice, for example, thought of as an independent object of enquiry and perhaps of knowledge. These two poles are not distinct meanings of the term, although they cause many problems of interpretation; between them they define a space of philosophical problems. On the one hand, ideas are that with which we think, or, in Locke's terms, whatever the mind may be employed about in thinking. Looked at that way, they are inherently transient, fleeting, and unstable private presences. On the other hand, ideas provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato's theory of Forms is a celebration of the objective and timeless existence of ideas as concepts, and on this side ideas are reified to the point where they make up the only real world, of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notably in the Timaeus, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.

Together with a general bias toward the sensory, so that what lies in the mind may be thought of as something like images, and a belief that thinking is well explained as the manipulation of images, this view was developed by Locke, Berkeley, and Hume into a full-scale account of the understanding as the domain of images, although they were all aware of anomalies that were later regarded as fatal to the doctrine. The defects in the account were exposed by Kant, who realized that the understanding needs to be thought of more in terms of rules and organizing principles than as any kind of copy of what is given in experience. Kant also recognized the danger of the opposite extreme (that of Leibniz) of failing to connect the elements of understanding with those of experience at all (Critique of Pure Reason).

It has become more common to think of ideas, or concepts, as dependent upon social and especially linguistic structures, rather than as the self-standing creatures of an individual mind, but the tension between the objective and the subjective aspects of the matter lingers on, for instance in debates about the possibility of objective knowledge, about indeterminacy in translation, and about identity between the thoughts people entertain at one time and those that they entertain at another.

To possess a concept is to be able to deploy a term expressing it in making judgements: the ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term "idea" was formerly used in the same way, but it is now avoided because of its association with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term. Frege regarded predicates as incomplete expressions, in the same way that an expression for a function, such as 'sine . . .' or 'log . . .', is incomplete. Predicates refer to concepts, which are themselves "unsaturated," and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.

Mental states have contents: a belief may have the content that I will catch the train, a hope may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something - a particular object, or property, or relation, or other entity.

Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of May Smith, or as the person located in a certain room now. More generally, a concept 'c' is distinct from a concept 'd' if it is possible for a thinker rationally to believe that 'c' is such-and-such without believing that 'd' is such-and-such. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by "that . . ." clauses, as in our opening examples, they are capable of being true or false, depending on the way the world is.

Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; nonetheless, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen's Pictures, was a spy: we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person's conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not that conception is correct, it is quite intelligible for someone to reject it by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.

A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy - and perhaps even some of our contemporaries - are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought "I think," containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of the two kinds of theory are distinct, each is required to have an adequate account of its relation to the other. A theory of concepts is unacceptable if it gives no account of how a concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.

A fundamental question for philosophy is: what individuates a given concept - that is, what makes it the concept it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question. An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept "and" is individuated by this condition: it is the unique concept "C" to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses "A" and "B," "A C B" can be inferred; and from any premiss "A C B," each of "A" and "B" can be inferred. Again, a relatively observational concept such as "round" can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those which are. An account that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
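
Schematically, and only as an illustrative sketch (standard natural-deduction notation, with the concept being individuated written infix as 'C'), the two inference forms just described are:

    \[
      \frac{A \qquad B}{A \; C \; B}
      \qquad\qquad
      \frac{A \; C \; B}{A}
      \qquad
      \frac{A \; C \; B}{B}
    \]

A thinker possesses "and" just in case he finds precisely these transitions primitively compelling; the notation adds nothing to the prose formulation above.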

A possession condition for a particular concept may actually make use of that concept; the possession condition for "and" does not. We can also expect to use relatively observational concepts in specifying the kinds of experience that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker's mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two families that plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of numerical quantifiers (there are 0 so-and-sos, there is 1 so-and-so, . . .); and the family consisting of the concepts "belief" and "desire." Such families exhibit what has come to be known as "local holism." A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such a condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated, and the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.

A possession condition may in various ways make a thinker's possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker's perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject's environment. If that is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. There are also intuitive cases in which, even though the thinker's non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker's social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker's social relations, in particular his linguistic relations.

Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a correctness condition for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker's reasons for making judgements. A thinker's visual perception can give him good reason for judging "That man is bald": it does not by itself give him good reason for judging "Rostropovich is bald," even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . .) which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker's previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue: it would allow us to say how the correctness condition is determined for a judgement about a newly encountered object. The judgement is correct if the new object has the property that in fact makes the judgemental practices mentioned in the possession condition yield true judgements, or truth-preserving inferences.

Despite the fact that the unity of consciousness had been at the centre of pre-20th century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to it. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgements, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being regarded either as a myth or as something that we cannot and do not need to study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of 'phenomenology' into a respectable theory.

The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.

To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists speak of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness as distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.

It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a cognitive state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know many of my properties and much of what is happening to me, at both the physical and the mental level. I also know things about my past, things I have done and people I have been with, other people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? All of this involves my ability to think about myself. Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects.

When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.

Even so, confronted with this range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose. This is my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.

The proposed account would be a deflationary account of self-consciousness. If the semantics of the first-person pronoun provides a straightforward explanation of what makes first-person contents immune to error through misidentification, then it seems fair to say that the problem of self-consciousness has been dissolved at least as much as solved.

This proposed account would be on a par with other noted deflationary accounts, such as the redundancy theory of truth. That is to say, the redundancy (or deflationary) theory of truth claims that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence 'redundancy'), and (2) that in less direct contexts, such as 'everything he said was true', or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second claim can be translated as '(∀p)(∀q)((p & (p ➞ q)) ➞ q)', where no notion of truth is used.
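
Rendered in standard logical notation, the two points might be put as follows (a minimal sketch; 'true' appears only to state the schema that the theory claims is eliminable):

    \[
      \text{(1)}\quad \text{It is true that } p \;\leftrightarrow\; p
      \qquad\qquad
      \text{(2)}\quad \forall p\,\forall q\,\bigl((p \wedge (p \rightarrow q)) \rightarrow q\bigr)
    \]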

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that 'p', then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when not-p.

It is important to stress how the redundancy or deflationary theory of self-consciousness, like any theory of self-consciousness that accords a serious role to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:

Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)

So how can a thinker incapable of reflexively referring to himself in the way English speakers do with the first-person pronoun plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.

The best developed functionalist theory of self-reference has been put forward by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, that is, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since "agency entails neither linguistic ability nor conscious belief." The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. Consider, for example, a creature 'x' who is hungry and has a desire for food at time 't'. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of 'x' at that time. Moreover, for b(p) to cause 'x' to eat what is in front of it at 't', b(p) must be a belief that 'x' has at 't'. Therefore, the utility/truth condition of b(p) is that whatever creature has this belief faces food when it has it. And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be "I am facing food now." A belief that would naturally be expressed with these words can nevertheless be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.

For in order to believe 'p', I need only be disposed to eat what I face if I feel hungry: a disposition which causal contiguity ensures that only my simultaneous hunger can provide, and only by making me eat, and only then. That is what makes my belief refer to me and to the time at which I have it. And that is why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any "sense" of "I" or "now," to fix the reference of my subjective beliefs: causal contiguity fixes them for me.
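
Read purely as a toy model rather than as Mellor's own formalism, the functional picture just described can be sketched in a few lines of Python: a belief token is represented only by what it causes, and its utility (= truth) condition is the circumstance under which the caused action satisfies the conjoined desire. All names here are illustrative assumptions.

    # Toy sketch of a "subjective belief" individuated by its causal role:
    # a function from a desire to an action, with no linguistic expression involved.
    from dataclasses import dataclass

    @dataclass
    class World:
        food_in_front: bool  # the circumstance that makes the caused action pay off

    def belief_token(desire: str) -> str:
        """The belief modelled purely as a causal function from a desire to an action."""
        return "eat what is in front of you" if desire == "hunger" else "do nothing"

    def desire_satisfied(action: str, world: World) -> bool:
        """The desire for food is satisfied only if the world cooperates."""
        return action == "eat what is in front of you" and world.food_in_front

    # The utility/truth condition of the belief is the condition under which the
    # action it causes satisfies the desire: that food is in fact in front of the
    # creature at the time it acts.
    print(desire_satisfied(belief_token("hunger"), World(food_in_front=True)))   # True
    print(desire_satisfied(belief_token("hunger"), World(food_in_front=False)))  # False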

Causal contiguity, on this explanation, may well be why no internal representation of the self is required, even at what other philosophers have called the sub-personal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are simply the believer and the time.

The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe the things that we do in fact believe.

The contiguity law in Leibniz extends the principle that there are no discontinuous changes in nature: "natura non facit saltum," nature makes no leaps. Leibniz was able to use the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. According to Hume, by contrast, the contiguity of events is an important element in our interpretation of their conjunction as causal.

Among the advocates of the functionalist point of view are Putnam and Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on conduct. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or "realization" of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states, both our own and those of others, via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be "variably realized" in causal architectures, just as much as they can be in different neurophysiological states.
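
The comparison with a software-level description can be illustrated with a toy "machine table" (an invented example, not anyone's actual proposal): the state is specified entirely by which inputs it responds to and which outputs and state transitions it produces, while nothing at all is said about the hardware that realizes it.

    # A toy machine table: each state is characterized purely by its functional role,
    # i.e., by the mapping from (state, input) to (behavioural output, next state).
    MACHINE_TABLE = {
        ("hungry",  "sees food"):    ("eat it",    "content"),
        ("hungry",  "sees nothing"): ("search",    "hungry"),
        ("content", "sees food"):    ("ignore it", "content"),
        ("content", "sees nothing"): ("rest",      "content"),
    }

    def step(state: str, observation: str) -> tuple[str, str]:
        """Return (behavioural output, next state) for the current state and input."""
        return MACHINE_TABLE[(state, observation)]

    # The same table could be realized by neurons, silicon, or clockwork; on the
    # functionalist picture, the table exhausts the identity of the states it mentions.
    print(step("hungry", "sees food"))  # ('eat it', 'content')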

In logic and mathematics, a function is a relation that associates with each member 'x' of one class 'X' some unique member 'y' of another class 'Y'. The association is written y = f(x); the class 'X' is called the domain of the function, and 'Y' its range. Thus 'the father of x' is a function whose domain includes all people and whose range is the class of male parents, but the relation 'son of x' is not a function, because a person can have more than one son. 'Sine x' is a function from angles to real numbers, the area of a circle is a function of its diameter x, and so on. Functions may take sequences x1, . . ., xn as their arguments, in which case they may be thought of as associating a unique member of 'Y' with any ordered n-tuple as argument. Given the equation y = f(x1, . . ., xn), x1, . . ., xn are called the independent variables, or arguments, of the function, and 'y' the dependent variable or value. Functions may be many-one, meaning that different members of 'X' may take the same member of 'Y' as their value, or one-one, when to each member of 'X' there corresponds a distinct member of 'Y'. A function with domain 'X' and range within 'Y' is also called a mapping from 'X' to 'Y', written f: X ➝ Y. If the function is such that (1) if x, y ∈ X and f(x) = f(y), then x = y, the function is an injection from X to Y; if also (2) whenever y ∈ Y, then (∃x)(x ∈ X & y = f(x)), the function is a bijection of 'X' onto 'Y'. A bijection is both an injection and a surjection, where a surjection is any function whose domain is 'X' and whose range is the whole of 'Y'. Since functions are relations, a function may also be defined as a set of ordered pairs ⟨x, y⟩ where 'x' is a member of 'X' and 'y' of 'Y'.
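
For a function on finite classes, represented simply as a set of ordered pairs, these properties can be checked directly. The following sketch is only an illustration; the example sets and mappings are invented for the purpose.

    # A finite function represented as a dict, i.e., a set of ordered pairs x -> f(x).
    def is_injection(f: dict) -> bool:
        """No two arguments share a value: f(x) = f(y) implies x = y."""
        return len(set(f.values())) == len(f)

    def is_surjection(f: dict, Y: set) -> bool:
        """Every member of Y is the value of the function for some argument."""
        return set(f.values()) == Y

    def is_bijection(f: dict, Y: set) -> bool:
        """A bijection is both an injection and a surjection onto Y."""
        return is_injection(f) and is_surjection(f, Y)

    f = {1: "a", 2: "b", 3: "c"}
    print(is_injection(f), is_surjection(f, {"a", "b", "c"}), is_bijection(f, {"a", "b", "c"}))
    # True True True

    g = {"Cain": "Adam", "Abel": "Adam"}   # 'father of x' is many-one, so not an injection
    print(is_injection(g))                 # False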

One of Frege's logical insights was that a concept is analogous to a function, and a predicate analogous to the expression for a function (a functor). Just as 'the square root of x' takes us from one number to another, so 'x is a philosopher' refers to a function that takes us from persons to truth-values: true for values of 'x' who are philosophers, and false otherwise.
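
As a rough illustration of the analogy (a sketch only; the set of philosophers is an invented assumption), a functor from numbers to numbers and a predicate treated as a function from objects to truth-values look structurally alike:

    import math

    # A functor taking a number to a number.
    square_root = math.sqrt

    # On Frege's analogy, a predicate is a function taking an object to a truth-value.
    PHILOSOPHERS = {"Socrates", "Kant", "Anscombe"}

    def is_a_philosopher(x: str) -> bool:
        return x in PHILOSOPHERS

    print(square_root(9))             # 3.0
    print(is_a_philosopher("Kant"))   # True
    print(is_a_philosopher("Elvis"))  # False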

Foundationalism can be attacked both in its commitment to immediate justification and in its claim that all mediately justified beliefs ultimately depend on the former. Though, in a sense, it is the latter that is the position's weaker point, most of the criticism has been directed at the former, and much of it has been directed against some particular form of immediate justification, ignoring the possibility of other forms. Thus much anti-foundationalist artillery has been directed at the 'myth of the given': the idea that items are given to consciousness in a pre-conceptual, pre-judgemental mode, and that beliefs can be justified on that basis (Sellars, 1963). The most prominent general argument against immediate justification is that whatever purportedly justifies a belief does so only if the subject is justified in supposing that the putative justifier has what it takes to do so. Hence, since the justification of the original belief depends on the justification of the higher-level belief just specified, the justification is not immediate after all. It may be replied that we lack adequate support for any such higher-level requirement for justification, and that if it were imposed we would be launched on an infinite regress, for a similar requirement would hold equally for the higher-level belief that the original justifier was efficacious.

On the view suggested by functionalism, an intelligent system, or mind, may fruitfully be thought of as the result of a number of sub-systems performing simpler tasks in coordination with one another. The sub-systems may be envisaged as homunculi, or small, relatively stupid agents. The archetype is a digital computer, where a battery of switches capable of only one response (on or off) can make up a machine that can play chess, write dictionaries, etc.
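
The point that a battery of single-response switches can compose into something none of them can do alone can be made concrete with a minimal sketch (an illustration only, tied to no particular architecture): each 'homunculus' below is a maximally stupid NAND gate, yet wired together they already perform one-bit addition.

    # Each "homunculus" is a switch with one trivial competence: a NAND gate.
    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    # Wiring the stupid switches together yields a half-adder: a one-bit sum and carry.
    def half_adder(a: int, b: int) -> tuple[int, int]:
        n1 = nand(a, b)
        total = nand(nand(a, n1), nand(b, n1))  # XOR built from NAND gates
        carry = nand(n1, n1)                    # AND built from NAND gates
        return total, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "->", half_adder(a, b))  # (sum, carry)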

Nonetheless, confronted with the range of putatively self-conscious cognitive states, one might assume that there is a single ability that they all presuppose. This is my ability to think about myself: I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are all ways of thinking about myself.

Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency: this involves concepts and descriptions that can apply equally to me and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain "I"-thoughts.

What is an "I"-thought? Obviously, an "I"-thought is a thought that involves self-reference: I can think an "I"-thought only by thinking about myself. Equally obviously, though, this cannot be all there is to say on the subject. I can think thoughts that involve self-reference but are not "I"-thoughts. Suppose I think that the next person to get a parking ticket in the centre of Toronto deserves everything he gets. Unbeknown to me, the very next recipient of a parking ticket will be me. This makes my thought self-referring, but it does not make it an "I"-thought. Why not? The answer is simply that I do not know that I will be the next person to get a parking ticket in downtown Toronto. If "A" is that unfortunate person, then there is a true identity statement of the form "I = A," but since I do not know that this identity holds, I cannot be ascribed the thought that I deserve everything I get. And so I am not thinking a genuine "I"-thought, because one cannot think a genuine "I"-thought if one is ignorant that one is thinking about oneself. So it is natural to conclude that "I"-thoughts involve a distinctive type of self-reference. This is the sort of self-reference whose natural linguistic expression is the first-person pronoun "I," because one cannot use the first-person pronoun without knowing that one is thinking about oneself.

This is still not quite right, however, because thought contents can be specified either directly or indirectly. The claim is that all the cognitive states being considered presuppose the ability to think about oneself. This is not only something they all have in common; it is also what underlies them all. We can now see in more detail what the suggestion amounts to. The claim is that what makes all those cognitive states modes of self-consciousness is the fact that they all have contents that can be specified directly by means of the first-person pronoun "I" or indirectly by means of the indirect reflexive pronoun "he": they are first-person contents.

The class of first-person contents is not a homogeneous class. There is an important distinction to be drawn between two different types of first-person content, corresponding to two different ways in which the first person can be employed. The existence of this distinction was first noted by Wittgenstein in an important passage from The Blue Book: there are two different cases in the use of the word "I" (or "my"), which I might call "the use as object" and "the use as subject." Examples of the first kind of use are these: "My arm is broken," "I have grown six inches," "I have a bump on my forehead," "The wind blows my hair about." Examples of the second kind are: "I see so-and-so," "I try to lift my arm," "I think it will rain," "I have a toothache." (Wittgenstein 1958)

The explanation Wittgenstein gives of the distinction hinges on whether or not the judgements involve identification: one can point to the difference between these two categories by saying that the cases of the first category involve the recognition of a particular person, and there is in these cases the possibility of an error, or rather, the possibility of an error has been provided for. It is possible that, say in an accident, I should feel a pain in my arm, see a broken arm at my side, and think it is mine when really it is my neighbour's. And I could, looking into a mirror, mistake a bump on his forehead for one on mine. On the other hand, there is no question of recognizing a person when I say I have a toothache. To ask "are you sure that it is you who have pains?" would be nonsensical (Wittgenstein 1958).

Wittgenstein is drawing a distinction between two types of first-person content. The first type, which he describes as involving the use of "I" as object, can be analysed in terms of more basic propositions. Suppose that the thought "I am B" involves such a use of "I." Then we can understand it as a conjunction of the following two thoughts: "a is B" and "I am a." We can term the former the predication component and the latter the identification component (Evans 1982). The reason for breaking the original thought down into these two components is precisely the possibility of error that Wittgenstein stresses in the second passage quoted: one can be quite correct in predicating that someone is B, even though mistaken in identifying oneself as that person.

To say that a statement “a is B” is subject to error through misidentification relative to the term “a” means the following is possible: The speaker knows some particular thing to be “B,” but makes the mistake of asserting “a is B” because, and only because, he mistakenly thinks that the thing he knows to be “B” is what “a” refers to (Shoemaker 1968).

The point, then, is that one cannot be mistaken about who is being thought about. In one sense, though, Shoemaker's criterion of immunity to error through misidentification relative to the first-person pronoun (henceforth simply "immunity to error through misidentification") is too restrictive. Beliefs with first-person contents that are immune to error through misidentification tend to be acquired on grounds that usually result in knowledge, but they do not have to be. The definition of immunity to error through misidentification needs to be adjusted to accommodate them by formulating it in terms of justification rather than knowledge.

The connection to be captured is between the sources and grounds from which a belief is derived and the justification there is for that belief. Beliefs and judgements are immune to error through misidentification in virtue of the grounds on which they are based. The category of first-person contents being picked out is not defined by its subject matter or by any point of grammar. What demarcates the class of judgements and beliefs that are immune to error through misidentification is the evidence base from which they are derived, or the information on which they are based. So, for example, my thought that I have a toothache is immune to error through misidentification because it is based on my feeling a pain in my teeth. Similarly, the fact that I am consciously perceiving you makes my belief that I am seeing you immune to error through misidentification.

On the revised account, to say that a statement "a is b" is subject to error through misidentification relative to the term "a" means that the following is possible: the speaker is warranted in believing that some particular thing is "b," because his belief is based on an appropriate evidence base, but he makes the mistake of asserting "a is b" because, and only because, he mistakenly thinks that the thing he justifiably believes to be "b" is what "a" refers to.


First-person contents that are immune to error through misidentification can be mistaken, but they do have a basic warrant in virtue of the evidence on which they are based, because the fact that they are derived from such an evidence base is closely linked to the fact that they are immune to error through misidentification. Of course, there is room for considerable debate about which types of evidence base are correlated with this class of first-person contents. It seems, then, that the distinction between different types of first-person content can be characterized in two different ways. We can distinguish between those first-person contents that are immune to error through misidentification and those that are subject to such error. Alternatively, we can distinguish between first-person contents with an identification component and those without such a component. For present purposes, these different formulations pick out the same classes of first-person contents, although in interestingly different ways.

All first-person contents subject to error through misidentification contain an identification component of the form "I am a," and this identification component itself employs the first-person pronoun. Of that employment we can ask in turn: does it or does it not have an identification component? Clearly, on pain of an infinite regress, at some stage we must arrive at an employment of the first-person pronoun that does not presuppose an identification component. The conclusion, then, is that any first-person content subject to error through misidentification will ultimately be anchored in a first-person content that is immune to error through misidentification.

It is also important to stress how the deflationary theory of self-consciousness, and any theory of self-consciousness that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language. The principle has been defended most vigorously by Michael Dummett.



Dummett goes on to draw the clear methodological implications of this view of the nature of thought: we communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.

Many philosophers would want to dissent from the strong claim that the philosophical analysis of thought through the philosophical analysis of language is the fundamental task of philosophy. But there is a weaker principle that is very widely held; call it the Thought-Language Principle.

As it stands, the problem turns on two different roles that the pronoun "he" can play in such clauses. On the one hand, "he" can be employed in a clause reporting a proposition that the antecedent of the pronoun (i.e., the person named just before the clause in question) would have expressed using the first-person pronoun. In such a situation "he" is functioning as a quasi-indicator; when it does so, it is often written "he*." Others have described this as the indirect reflexive pronoun. When "he" is functioning as an ordinary indicator, it picks out an individual in such a way that the person named just before the clause in question need not realize the identity of himself with that person. Clearly, then, the class of first-person contents is not a homogeneous class.

There is an obvious but central question that arises in considering the relation between the content of thought and the content of language, namely, whether there can be thought without language, as theories like the functionalist theory suggest. The conception of thought and language that underlies the Thought-Language Principle is clearly opposed to the proposal that there might be thought without language, but it is important to realize that neither the principle nor the considerations adverted to by Dummett directly yields the conclusion that there can be no thought in the absence of language. According to the principle, the capacity for thinking particular thoughts can only be analysed through the capacity for linguistic expression of those thoughts. On the face of it, however, this does not yield the claim that the capacity for thinking particular thoughts cannot exist without the capacity for their linguistic expression.

That thoughts are wholly communicable does not entail that thoughts must always be communicated, which would be an absurd conclusion. Nor does it appear to entail that there must always be a possibility of communicating thoughts in any sense in which this would be incompatible with the ascription of thoughts to a non-linguistic creature. There is, after all, a primary distinction between thoughts being wholly communicable and it being actually possible to communicate any given thought. But without that conclusion there seems no way of getting from a thesis about the necessary communicability of thought to a thesis about the impossibility of thought without language.

A subject has distinguishing self-awareness to the extent that he is able to distinguish himself from the environment and its contents. He has distinguishing psychological self-awareness to the extent that he is able to distinguish himself as a psychological subject within a contrast space of other psychological subjects. What does this require? The notion of a non-conceptual point of view brings together the capacity to register one's distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these two elements must be considered together emerges from the point that the richness of the self-awareness that accompanies the capacity to distinguish the self from the environment is directly proportional to the richness of the awareness of the environment from which the self is being distinguished. So no creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment, and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story. It leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it is composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.

It will help, first, to take a step back from primitive self-consciousness and consider the account of self-identifying first-person thoughts given in Gareth Evans’s The Varieties of Reference (1982). Evans places considerable stress on the connection between the form of self-consciousness that he is considering and a grasp of the spatial nature of the world. As far as Evans is concerned, the capacity to think genuine first-person thoughts implicates a capacity for self-location, which he construes in terms of a thinker’s capacity to conceive of himself as an element of the objective order. Though one need not endorse the particular gloss that Evans puts on this, the general idea is very powerful. The relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. Evans himself tends to stress a dependence in the opposite direction between these notions:

The very idea of a perceivable, objective, spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is is given by what he can perceive (Evans 1982).

But the main thrust of his work is very much that the dependence holds equally in the opposite direction.

It seems that this general idea can be extrapolated and brought to bear on the notion of a non-conceptual point of view. What binds together the two apparently discrete components of a non-conceptual point of view is precisely the fact that a creature’s self-awareness must be awareness of itself as a spatial being that acts upon, and is acted upon by, the spatial world. Evans’s own gloss on how a subject’s self-awareness is awareness of himself as a spatial being involves the subject’s mastery of a simple theory explaining how the world makes his perceptions as they are, with principles like “I perceive such and such; such and such holds at p; so (probably) I am at p” and “I am at p; such and such does not hold at p; so I cannot really be perceiving such and such, even though it appears that I am” (Evans 1982). This is not very satisfactory, though. If the claim is that the subject must explicitly hold these principles, then it is clearly false. If, on the other hand, the claim is that these are the principles of a theory that a self-conscious subject must tacitly know, then the claim seems very uninformative in the absence of a specification of the forms of behaviour that can only be explained by the ascription of such a body of tacit knowledge. We need an account of what it is for a subject to be correctly described as possessing such a simple theory of perception. The point, however, is simply that the notion of a non-conceptual point of view as presented here can be viewed as capturing, at a more primitive level, precisely the same phenomenon that Evans is trying to capture with his notion of a simple theory of perception.

But it must not be forgotten that a vital role in this is played by the subject’s own actions and movements. Appreciating the spatiality of the environment and one’s place in it is largely a function of grasping one’s possibilities for action within the environment: realizing that if one wants to return to a particular place from here one must pass through these intermediate places, or that if there is something there that one wants, one should take this route to obtain it. That this is something Evans’s account could potentially overlook emerges when one reflects that a simple theory of perception of the form described could be possessed and deployed by a subject that moves only passively through the world. The notion of a non-conceptual point of view, by contrast, incorporates the dimension of action by emphasizing the capacities involved in navigation.

Moreover, stressing the importance of action and movement indicates how the notion of a non-conceptual point of view might be grounded in the self-specifying information for action to be found in visual perception. What I have in mind here is the concept of an affordance, so central to Gibsonian theories of perception. One important type of self-specifying information in the visual field is information about the possibilities for action and reaction that the environment affords the perceiver, and this is a further source of non-conceptual first-person contents. The development of a non-conceptual point of view clearly involves certain forms of reasoning, and clearly we will not have a full understanding of the notion until we have an explanation of how this reasoning can take place and of the contents over which it takes place. The spatial reasoning involved in developing a non-conceptual point of view upon the world is largely a matter of calibrating different sources of spatial information into a single integrated representation of the world.

In short, any learned cognitive abilities are constructible out of more primitive abilities already in existence, and there is good reason to think that the basic perceptual capacities are innate. So if the perception of invariants is the key to the cumulative construction of an integrated spatial representation of the environment via the recognition of symmetries, transitivities, and identities, it is perfectly conceivable that the capacities implicated in an integrated representation of the world could emerge non-mysteriously from innate abilities.
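
As an illustrative gloss (a formalization of my own, not one given in the argument above), let E(x, y) stand for “x is equidistant from the perceiver with y” and F(x, y) for “x is farther from the perceiver than y”. The structural features just mentioned can then be written as:

\forall x\,\forall y\;\bigl(E(x,y)\rightarrow E(y,x)\bigr) \quad \text{(symmetry)}
\forall x\,\forall y\,\forall z\;\bigl(F(x,y)\wedge F(y,z)\rightarrow F(x,z)\bigr) \quad \text{(transitivity)}
a=b\;\rightarrow\;\bigl(F(a,c)\leftrightarrow F(b,c)\bigr) \quad \text{(identity: a re-identified place enters the same relations)}

Recognizing regularities of this form in what is perceived is one way in which piecemeal egocentric information could be knitted into a single integrated spatial representation.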

Nonetheless, there are many philosophers who would be prepared to countenance the possibility of non-conceptual content without accepting that the theory of non-conceptual content can be used to solve the paradox of self-consciousness. Showing that it can is a more substantial task. The methodology adopted here rests on the first of the marks of content, namely that content-bearing states serve to explain behaviour in situations where the connections between sensory input and behavioural output cannot be plotted in a law-like manner. This is not to say that every instance of intentional behaviour in which there are no such law-like connections needs to be explained by attributing to the creature in question representational states with first-person contents. Even so, many such instances of intentional behaviour do need to be explained in this way, and this offers a way of establishing the legitimacy of non-conceptual first-person contents. What would satisfactorily demonstrate their legitimacy would be the existence of forms of behaviour in pre-linguistic or non-linguistic creatures for which inference to the best explanation (which in this context includes inference to the most parsimonious explanation) demands the ascription of states with non-conceptual first-person contents.

Non-conceptual first-person contents and the pick-up of self-specifying information in the structure of exteroceptive perception provide very primitive forms of non-conceptual self-consciousness - forms that can plausibly be viewed as in place from birth or shortly afterward. The dimension along which forms of self-consciousness must be compared is the richness of the conception of the self that they provide. A crucial element in any form of self-consciousness is how it enables the self-conscious subject to distinguish between self and environment - what many developmental psychologists term self-world dualism. In this sense, self-consciousness is essentially a contrastive notion. One implication of this is that a proper understanding of the richness of any given conception of the self requires taking into account the richness of the conception of the environment with which it is associated. In the case of both somatic proprioception and the pick-up of self-specifying information in exteroceptive perception, the associated conception of the environment is relatively impoverished. One prominent limitation is that both are synchronic rather than diachronic: the distinction between self and environment that they offer is effective at a time but not over time. The contrast between propriospecific and exterospecific invariants in visual perception, for example, provides a way for a creature to distinguish between itself and the world at any given moment, but this is not the same as a conception of oneself as an enduring thing distinguishable over time from an environment that also endures over time.
