June 23, 2010


Tyler Burge has in fact shown that there is a sense in which thought content is dependent on the meanings of words in one's linguistic community. Alfred's use of 'arthritis' is fairly standard, except that he is under the misconception that arthritis is not confined to the joints: he also applies the word to rheumatoid ailments outside the joints. Noticing an ailment in his thigh that is symptomatically like the disease in his hands and ankles, he says to his doctor, 'I have arthritis in the thigh'. Here Alfred is expressing his false belief that he has arthritis in the thigh. But now consider a counterfactual situation that differs in just one respect (and whatever it entails): Alfred's use of 'arthritis' is the correct use in his linguistic community. In this situation, Alfred would be expressing a true belief when he says 'I have arthritis in the thigh'. Since the proposition he believes is true while the proposition that he has arthritis in the thigh is false, he believes some other proposition. This shows that standing in the belief relation to a proposition can be partly determined by the meanings of words in one's public language. The Burge phenomenon seems real, but it would be nice to have a deep explanation of why thought content should be dependent on language in this way.


Finally, there is the old question of whether, or to what extent, a creature who does not understand a natural language can have thoughts. Now it seems pretty compelling that higher mammals and humans raised without language have their behaviour controlled by mental states that are sufficiently like our beliefs, desires, and intentions to share those labels. It also seems easy to imagine non-communicating creatures who have sophisticated mental lives (they build weapons, dams, and bridges, have clever hunting devices, and so on). At the same time, ascriptions of particular contents to non-language-using creatures typically seem exercises in loose speaking (does the dog really believe that there is a bone in the yard?), and it is no accident that, as a matter of fact, creatures who do not understand a natural language have at best primitive mental lives. One might suppose that the primitive mental lives of animals account for their failure to master natural language, but the better explanation may be Chomsky's: a language faculty unique to our species. As regards the inevitably primitive mental life of an otherwise normal human raised without language, this might simply be due to the ignorance and lack of intellectual stimulation such a person would be doomed to. On the other hand, it might also be that higher thought requires a neural language with a structure comparable to that of a natural language, and that such neural languages are somehow acquired along with natural language. The ascription of content to the propositional-attitude states of languageless creatures is a difficult topic that needs more attention. It is possible that, on reflection about our ascriptions of propositional content, we will realize that these ascriptions are egocentrically based on a similarity to the language in which we express our beliefs. We might then learn that we have no principled basis for ascribing propositional content to a creature who does not speak something, or who does not have internal states with natural-language-like structure. It is somewhat surprising how little we know about thought's dependence on language.

The Language of Thought hypothesis has a compelling neatness about it. A thought is depicted as a structure of internal representational elements, combined in a lawful way, which plays a certain functional role in an internal processing economy. The functionalist thinks of mental states and events as causally mediating between a subject's sensory inputs and that subject's ensuing behaviour. Functionalism itself is the stronger doctrine that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the functional relations it bears to the subject's perceptual stimuli, behavioural responses, and other mental states.

The representational theory of the mind arises with the recognition that thoughts have contents carried by mental representations.

Nonetheless, theorists seeking to account for the mind's activities have long sought analogues to the mind. In modern cognitive science, these analogues have provided the basis for the simulation or modelling of cognitive performance. If a simulation performs in a manner comparable to the mind, that offers support for the theory underlying the analogue on which the simulation is based; simulation also serves a heuristic function, suggesting ways in which the mind might operate in physical terms. One difficulty is to understand how any physical item could come to denote things or attribute properties to them. The problem is most obvious in the case of 'arbitrary' signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes (though the problem remains for iconic representation). What kind of mental representation might support denotation and attribution if not linguistic representation? Denotation and attribution are among the semantic properties of thoughts: thoughts, in having content, possess semantic properties. If thoughts denote and attribute, sententialism may be best positioned to explain how this is possible.

Beliefs are true or false. If, as representationalism would have it, beliefs are relations to mental representations, then beliefs must be relations to representations that have truth values among their semantic properties. Beliefs serve a function within the mental economy. They play a central part in reasoning and, thereby, contribute to the control of behaviour. To be rational, a set of beliefs, desires, and actions - also perceptions, intentions, decisions - must fit together in various ways. If they do not, in the extreme case they fail to constitute a mind at all: no rationality, no agent. This core notion of rationality in the philosophy of mind thus concerns a cluster of personal identity conditions: 'holistic' coherence requirements upon the system of elements comprising a person's mind. A related conception, epistemic or normative rationality, concerns key linkages among the cognitive, as distinct from the qualitative, mental states. The main issue is characterizing these types of mental coherence.

Closely related to thought's systematicity is its productivity: we have a virtually unbounded competence to think ever more complex novel thoughts bearing clear semantic ties to their less complex predecessors. Systems of mental representation apparently exhibit the sort of productivity distinctive of spoken languages. Sententialism accommodates this fact by identifying the productive system of mental representation with a language of thought, the basic terms of which are subject to a productive grammar.

Possibly, in reasoning, mental representations stand to one another just as public sentences do in valid formal derivations. Reasoning would then preserve the truth of beliefs by being the manipulation of truth-valued sentential representations according to rules so selectively sensitive to the syntactic properties of the representations as to respect and preserve their semantic properties. The sententialist hypothesis is thus that reasoning is formal inference: a process tuned primarily to the structure of mental sentences. Reasoners, then, are things very much like classical programmed computers. Thinking, according to sententialism, may then be like quoting. To quote an English sentence is to issue, in a certain way, a token of a given English sentence type; it is certainly not thereby to issue a token of every semantically equivalent type. Perhaps thought is much the same. If to think is to token a sentence in the language of thought, the sheer tokening of one mental sentence need not ensure the tokening of a formally distinct equivalent; hence thought's opacity.
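To make the picture concrete, here is a minimal sketch (in Python, chosen only for illustration; the rule, the sentence strings, and the function name are all invented for this example and are not drawn from any sententialist's actual proposal) of inference that is sensitive only to the form of its tokens. Because the rule derives new tokens by matching strings, it is blind to semantic equivalences between formally distinct tokens - which is just the opacity noted above.

```python
# Hypothetical illustration: reasoning as purely syntactic manipulation of
# sentence-like tokens, as the sententialist hypothesis suggests.

def formal_modus_ponens(beliefs):
    """Derive new tokens from existing ones by matching their form alone."""
    derived = set(beliefs)
    for sentence in beliefs:
        if sentence.startswith("IF ") and " THEN " in sentence:
            antecedent, consequent = sentence[3:].split(" THEN ", 1)
            if antecedent in derived:      # the match is purely formal (string identity)
                derived.add(consequent)
    return derived

beliefs = {
    "Fido is a dog",
    "IF Fido is a dog THEN Fido barks",
    "IF Fido is a canine THEN Fido sheds",  # antecedent roughly equivalent in meaning
}

print(formal_modus_ponens(beliefs))
# 'Fido barks' is derived, but 'Fido sheds' is not: the rule is deaf to the
# near-equivalence of 'dog' and 'canine', mirroring thought's opacity.
```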

Objections to the language of thought come from various quarters. Some will not tolerate any version of representationalism, including sententialism; others endorse representationalism while denying that mental representations could involve anything like a language. Representationalism is launched by the assumption that psychological states are relational - that being in a psychological state minimally involves being related to something. But perhaps psychological states are not relational at all. Verbalism begins by denying that expressions of psychological states are relational, infers that psychological states themselves are monadic, and thereby opposes classical versions of representationalism, including sententialism.

What all this is supposed to show is that, with Chomsky's work and advances in computer science, the 1960s saw a rebirth of 'mentalistic' or 'cognitivist' approaches to psychology and the study of mind.

These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists' actual practice. But if the account they give of cognitive theories diverges significantly from the theories that psychologists actually produce, then the philosophers have simply gotten it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice, but to criticize and improve it. The most common target of this critical approach is the use of intentional concepts in cognitive psychology. Intentional notions have been criticized on various grounds. The two we shall consider are that they fail to supervene on the physiology of the cognitive agent, and that they cannot be 'naturalized'.

Perhaps the most radical approach is the proposal that cognitive psychology should recast its theories and explanations in a way that appeals not to intentional properties but only to 'syntactic' properties. Somewhat less radical is the suggestion that we can define a species of narrow representation, which does supervene on an organism's physiology, and that psychological explanations that appeal to ordinary ('wide') intentional properties can be replaced by explanations that invoke only their narrow counterparts. Many philosophers, however, have urged that the problem lies in the argument, not in the way that cognitive psychology goes about its business. The most common critique of the argument focuses on the normative premise - the one that insists that psychological explanations ought not to appeal to 'wide' properties that fail to supervene on physiology. Why, the critics ask, should psychological explanations not appeal to wide properties? What exactly is wrong with psychological explanations invoking properties that do not supervene on physiology? Various answers have been proposed in the literature, though they typically end up invoking metaphysical principles that are less clear and less plausible than the normative thesis they are supposed to support.

Given any psychological property that fails to supervene on physiology, it is trivial to characterize a correlated narrow property that does supervene. The extension of the correlated property includes all actual and possible objects in the extension of the original property, plus all actual and possible physiological duplicates of those objects. Theories originally stated in terms of wide psychological properties can be recast in terms of these narrow correlates without loss of descriptive or explanatory power. It might be protested that, when characterized in this way, narrow belief and narrow content are not really species of belief and content at all. But it is far from clear how this claim could be defended, or why we should care if it turns out to be right.

The worry about the 'naturalizability' of intentional properties is much harder to pin down. According to Fodor, the worry derives from a certain ontological intuition: that there is no place for intentional categories in a physicalistic view of the world, and thus that the semantic and/or the intentional will prove permanently recalcitrant to integration into the natural order. If, however, intentional properties cannot be integrated into the natural order, then presumably they ought to be banished from serious scientific theorizing; psychology should have no truck with them. Indeed, if intentional properties have no place in the natural order, then nothing in the natural world has intentional properties, and intentional states do not exist at all. So goes the worry. Unfortunately, neither Fodor nor anyone else has said anything very helpful about what is required to 'integrate' intentional properties into the natural order. There are, to be sure, various proposals to be found in the literature. But all of them seem to suffer from a fatal defect. On each account of what is required to naturalize a property or integrate it into the natural order, there are lots of perfectly respectable non-intentional scientific or common-sense properties that fail to meet the standards. Thus all the proposals made so far must themselves be rejected.

Now, of course, the fact that no one has been able to give a plausible account of what is required to 'naturalize' the intentional may indicate nothing more than that the project is a difficult one. Perhaps with further work a more plausible account will be forthcoming. But one might also offer a very different diagnosis of the failure of all the accounts of 'naturalizing' that have so far been offered. Perhaps the 'ontological intuition' that underlies the worry about integrating the intentional into the natural order is simply muddled. Perhaps there is no coherent criterion of naturalization or naturalizability that all properties invoked in respectable science must meet. Perhaps this diagnosis is the right one. Until those who are worried about the naturalizability of the intentional provide us with some plausible account of what is required of intentional categories if they are to find a place in 'a physicalistic view of the world', we are arguably justified in refusing to take their worry seriously.

Recently, John Searle (1992) has offered a new set of philosophical arguments aimed at showing that certain theories in cognitive psychology are profoundly wrong-headed. The theories that are his target offer computational explanations of various psychological capacities - such as the capacity to recognize grammatical sentences, or the capacity to judge which of two objects in one's visual field is further away. Typically, these theories are set out in the form of a computer program - a set of rules for manipulating symbols - and the explanation offered for the exercise of the capacity in question is that people's brains are executing the program. The central claim in Searle's critique is that being a symbol or a computational state is not an 'intrinsic' physical feature of a computer state or a brain state. Rather, being a symbol is an 'observer-relative' feature. However, Searle maintains, only intrinsic properties of a system can play a role in causal explanations of how it works. Thus, appeal to symbolic or computational states of the brain could not possibly play a role in a causal account of cognition.

We have now surveyed some of the philosophical arguments aimed at showing that cognitive psychology is confused and in need of reform. My reaction to those arguments was none too sympathetic. In each case, it was maintained that it is the philosophical argument that is problematic, not the psychology it criticizes.

It is fair to ask where we get the powerful inner code whose representational elements need only systematic construction to express, for example, the thought that cyclotrons are bigger and vaster than black holes. On this matter, the language of thought theorist has little to say. All that concept learning could be, according to the language of thought theorist - assuming it is to be some kind of rational process and not due to mere physical maturation or a bump on the head - is the trying out of combinations of existing representational elements to see if a given combination captures the sense (as evidenced in its use) of some new concept. The consequence is that concept learning, conceived as the expansion of our representational resources, simply does not happen. What happens instead is that we work with a fixed, innate repertoire of elements whose combination and construction must express any content we can ever learn to understand. And note that this is not the trivial claim that in some sense the resources a system starts with must set limits on what knowledge it can acquire. For these are limits which flow not from, for example, sheer physical size, number of neurons, or connectivity of neurons, but from a base class of genuinely representational elements. They are more like the limits that being restricted to the propositional calculus would place on the expressive power of a system than, say, the limits that having a certain amount of available memory storage would place on one.

But this picture of representational stasis, in which all change consists in the redeployment of existing representational resources, is one that is fundamentally alien to much influential theorizing in developmental psychology. The prime example is that of a developmentalist who believed in a much stronger form of change, placing genuine expansion of representational power at the very heart of a model of human development. In a similar vein, recent work in the field of connectionism seems to open up the possibility of putting well-specified models of strong representational change back into the centre of cognitive scientific endeavours.

Nonetheless, understanding how the underlying combinatorial code 'develops' may be no less important to a deep understanding of cognitive processes than understanding the structure and use of the code itself (though doubtless the two projects would need to be pursued hand-in-hand).

The language of thought depicts thoughts as structures of concepts, which in turn exist as elements (for basic concepts) or concatenations of elements (for the rest) in the inner code. Intentional states, as common sense understands them, have both causal and semantic properties, and the combination appears to be unprecedented. A further problem, concerning inferential role semantics, is that it is almost invariably suicidally holistic. And it seems that, if externalism is right, then (some of) the intentional properties of thought are essentially 'extrinsic': they essentially involve mind-to-world relations. All in all, assuming that the computational role of a mental representation is determined entirely by its intrinsic properties - such properties as its weight, shape, or electrical conductivity, as it might be - it is hard to see how its extrinsic properties could enter into computation. Which is to say that it is hard to see how there could be computationally sufficient conditions for being in an intentional state, and hence how the immediate implementation of intentional laws could be computational.

However, there is little to be said about intrinsic relations between basic representational items. Even bracketing the (difficult) question of which, if any, words in our public language may express contents which have as their vehicles atomic items in the language of thought (an empirical question on which Fodor is, one assumes, officially agnostic), the question of semantic relations between atomic items in the language of thought remains. Are there any such relations? And if so, in what do they consist? Two thoughts are depicted as semantically related just in case they share elements; the elements themselves (like the words of the public language on which they are modelled) seem to stand in splendid isolation from one another. An advantage of some connectionist approaches lies precisely in their ability to address questions of the interrelation of basic representational elements (in fact, activation vectors) by representing such items as locations in a kind of semantic space. In such a space, related contents are always expressed by related representational elements. The connectionist's conception of significant structure thus goes much deeper than the Fodorian's. For the connectionist, representations need never be arbitrary: even the most basic representational items will bear non-accidental relations of similarity and difference to one another. The Fodorian, having reached representational bedrock, must explicitly construct any such further relations; they do not come for free as a consequence of using an integrated representational space. Whether this is a bad thing or a good one will depend, of course, on what kind of facts we need to explain. But one may suspect that representational atomism may turn out to be a conceptual economy that a science of the mind cannot afford.
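The contrast drawn here can be illustrated with a small sketch (Python again, with invented toy numbers and labels; nothing in it is taken from an actual connectionist model). Vector-style representations occupy a shared space, so graded relations of similarity between even the most basic items come for free, whereas atomic symbols support only identity and difference.

```python
import math

def cosine_similarity(u, v):
    """Similarity of two activation vectors: near 1.0 for similar directions, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Connectionist-style items: contents are locations in one representational space,
# so related contents are automatically expressed by related vectors.
vectors = {
    "dog":  [0.9, 0.8, 0.1],
    "wolf": [0.8, 0.9, 0.2],
    "sofa": [0.1, 0.0, 0.9],
}
print(cosine_similarity(vectors["dog"], vectors["wolf"]))  # high: related contents
print(cosine_similarity(vectors["dog"], vectors["sofa"]))  # low: unrelated contents

# Atomic, language-of-thought-style items: only identity or difference is available,
# and any further semantic relations must be explicitly constructed.
symbols = {"dog": "DOG", "wolf": "WOLF", "sofa": "SOFA"}
print(symbols["dog"] == symbols["wolf"])  # False - no graded relation comes for free
```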

The approach for ascribing contents must deal with the point that it seems metaphysically possible for there to be something that in actual and counterfactual circumstances behaves as if it enjoys states with content, when in fact it does not. If the possibility is not denied, this approach must add at least that the states with content causally interact in various ways with one another, and also causally produce intentional action. For most causal theorists, however, the radical separation of the causal and rationalizing roles of reason-giving explanations is unsatisfactory. For such theorists, where we can legitimately point to an agent's reasons to explain a certain belief or action, those features of the agent's intentional states that render the belief or action reasonable must be causally relevant in explaining how the agent came to believe or act in the way they rationalize. One way of putting this requirement is that reason-giving states not only cause, but also causally explain, their explananda.

On most accounts of causation, an acceptance of the causal explanatory role of reason-giving connections requires empirical causal laws employing intentional vocabulary. It is arguments against the possibility of such laws that have, however, been fundamental for those opposing a causal explanatory view of reasons. What is centrally at issue in these debates is the status of the generalizations linking intentional states to each other, and to ensuing intentional acts. An example of such a generalization would be: 'If a person desires X, believes that doing A would be a way of promoting X, is able to do A, and has no conflicting desires, then she will do A.' For many theorists such generalizations are a priori truths about the relations between desire, belief and action: grasping the truth of such a generalization is required to grasp the nature of the intentional states concerned. For some theorists the a priori elements within such generalizations disqualify them as empirical laws. That, however, seems too quick, for it would similarly rule out any generalizations in the physical sciences that contain a priori elements as a consequence of the implicit definition of their theoretical kinds within a causal explanatory theory. Causal theorists, including functionalists in the philosophy of mind, can claim that it is just such implicit definition that accounts for the a priori status of our intentional generalizations.

The causal explanatory approach to reason-giving explanations also requires an account of the intentional content of our psychological states which makes it possible for such content to be doing such work. It also provides a motivation for the reduction of intentional characteristics to extensional ones, in an attempt to fit intentional causality into a fundamentally materialist world picture. The very nature of the reason-giving relation, however, can be seen to render such reductive projects unrealizable. This therefore leaves causal theorists with the task of linking intentional and non-intentional levels of description in such a way as to accommodate intentional causality, without either over-determination or a miraculous coincidence of predictions from within distinct causally explanatory frameworks.

The existence of such causal links could well be written into the minimal core of rational transitions required for the ascription of the contents in question. Yet it is one thing to agree that the ascription of content involves a species of rational intelligibility; it is another to provide an explanation of this fact. There are competing explanations. One treatment regards rational intelligibility as ultimately dependent upon what we find intelligible, or on what we could come to find intelligible in suitable circumstances. This is an analogue of classical treatments of secondary qualities, and as such is a form of subjectivism about content. An alternative position regards the particular conditions for correct ascription of given contents as more fundamental. This alternative states that interpretation must respect these particular conditions. In the case of conceptual contents, this alternative could be developed in tandem with the view that concepts are individuated by the conditions for possessing them. These possession conditions would then function as constraints upon correct interpretation. If such a theorist also assigns references to concepts in such a way that the minimal rational transitions are also always truth-preserving, he will also have succeeded in explaining why such transitions are correct. Under an approach that treats conditions for attribution as fundamental, intelligibility need not be treated as a subjective property. There may be concepts we could never grasp because of our intellectual limitations, just as there are concepts that members of other species could not grasp. Such concepts have their possession conditions, but some thinkers could not satisfy those conditions.

Ascribing states with content to an actual person has to proceed simultaneously with the attribution of a wide range of non-rational states and capacities. In general, we cannot understand a person's reasons for acting as he does without knowing the array of emotions and sensations to which he is subject: what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Even the content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. A perception of the world as being a certain way is not (and could not be) under a subject's rational control. Though it is true and important that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons - observational beliefs about the environment - have contents which can only be elucidated by reference back to perceptual experience. In this respect (as in others) perceptual states differ from those beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing: for frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.

What is the significance for theories of content of the fact that it is almost certainly adaptive for members of a species to have a system of states with representational contents which are capable of influencing their actions appropriately? According to teleological theories of content, a constitutive account of content - one which says what it is for a state to have a given content - must make use of the notions of natural function and teleology. The intuitive idea is that for a belief state to have a given content 'p' is for the belief-forming mechanisms which produced it to have the function (perhaps derivatively) of producing that state only when it is the case that 'p'. One issue this approach must tackle is whether it is really capable of associating with states the classical, realistic, verification-transcendent contents which, pre-theoretically, we attribute to them. It is not clear that a content's holding unknowably can influence the replication of belief-forming mechanisms. But even if content itself proves to resist elucidation in terms of natural function and selection, it is still a very attractive view that selection must be mentioned in an account of what associates something - such as a sentence - with a particular content, even though that content itself may be individuated by other means.

Contents are normally specified by 'that . . .' clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of perceptual content is to regard the content as determined by a spatial type, the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. The type involves a specification of surfaces and features in the environment, and of their distances and directions from the perceiver's body as origin. Such contents lack any sentence-like structure at all. Supporters of the view that all content is conceptual will argue that the legitimacy of using these spatial types in giving the content of experience does not undermine the thesis that all content is conceptual. Such supporters will say that the spatial type is just a way of capturing what can equally be captured by conceptual components such as 'that distance', or 'that direction', where these demonstratives are made available by the perception in question. Friends of non-conceptual content will respond that these demonstratives themselves cannot be elucidated without mentioning the spatial types, which lack sentence-like structure.

The actions made rational by content-involving states are actions individuated in part by reference to the agent's relations to things and properties in his environment. Wanting to see a particular movie, and believing that that building over there is a cinema showing it, makes rational the action of walking in the direction of that building. Similarly, for the fundamental case of a subject who has knowledge about his environment, a crucial factor in making rational the formation of particular attitudes is the way the world is around him. One may expect, then, that any theory that links the attribution of contents to states with rational intelligibility will be committed to the thesis that the content of a person's states depends in part on his relations to the world outside him. We call this the thesis of externalism about content.

Externalism about content should steer a middle course. On the one hand, it should not ignore the truism that the relations of rational intelligibility involve not merely things and properties in the world, but the ways they are presented as being - an externalist should use some version of Frege's notion of a mode of presentation. On the other hand, the externalist for whom considerations of rational intelligibility are pertinent to the individuation of content is likely to insist that we cannot dispense with the notion of something in the world being presented in a certain way. If we dispense with the notion of something external being presented in a certain way, we are in danger of regarding attributions of content as having no consequences for how an individual relates to his environment, in a way that is quite contrary to our intuitive understanding of rational intelligibility.

Externalism comes in more and less extreme versions. Consider a thinker who perceives a particular pear and thinks the thought that that pear is ripe, where the demonstrative way of thinking of the pear expressed by 'that pear' is made available to him by his perceiving the pear. Some philosophers have held that the thinker would be employing a different perceptually based way of thinking were he perceiving a different pear. But externalism need not be committed to this. In the perceptual state that makes available the way of thinking, the pear is presented as being at a particular distance, and as having certain properties. A position will still be externalist if it holds that what is involved in the pear's being so presented is the collective role of these components of content in making intelligible, in various circumstances, the subject's relations to environmental directions, distances and properties of objects. This can be held without commitment to the object-dependence of the way of thinking expressed by 'that pear'. This less strenuous form of externalism must, though, address the epistemological arguments offered in favour of the more extreme versions, to the effect that only they are sufficiently world-involving.

The apparent dependence of the content of belief on factors external to the subject can be formulated as a failure of supervenience of belief content upon facts about what is the case within the boundaries of the subject's body. To claim that such supervenience fails is to make a modal claim: that there can be two persons the same in respect of their internal physical states (and so in respect of those of their dispositions that are independent of content-involving states) who nevertheless differ in respect of which beliefs they have. Hilary Putnam (1926-), the American philosopher of science, in Reason, Truth, and History (1981) marked out a subtle position that he calls internal realism, initially related to an ideal limit theory of truth and apparently maintaining affinities with verificationism, but in subsequent work more closely aligned with minimalism. Putnam's concern in the later period has largely been to deny any serious asymmetry between truth and knowledge as obtained in morals, and even in theology.

Nonetheless, in the case of content-involving perceptual states, it is a much more delicate matter to argue for the failure of supervenience. The fundamental reason for this is that perceptual content is answerable not only to factors on the input side - what, in certain fundamental cases, causes the subject to be in the perceptual state - but also to factors on the output side - what the perceptual state is capable of helping to explain amongst the subject's actions. If differences in perceptual content always involve differences in bodily-described actions in suitable counterfactual circumstances, and if these different actions always involve differences in internal states, there will after all be supervenience of content-involving perceptual states on internal states. But if this should turn out to be so, it is not a refutation of externalism for perceptual contents. A different reaction is that the formulation of the dependence as one of supervenience is in some cases too strong. A better formulation is given by a constitutive claim: that what makes a state have the content it does are certain of its complex relations to external states of affairs. This can be held without commitment to the modal separability of certain internal states from content-involving perceptual states.

Attractive as externalism about content may be, it has been vigorously contested, notably by the American philosopher of mind Jerry Alan Fodor (1935-), who is known for a resolute realism about the nature of mental functioning. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structure, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of 'holists' such as Donald Davidson (1917-2003); although Davidson is a defender of the doctrines of the 'indeterminacy' of radical translation and the 'inscrutability' of reference, his approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly 'extensional' approach to language. Davidson is also known for his rejection of the idea of a 'conceptual scheme', thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops so does the coherence of the idea that there is anything to translate. Nevertheless, Fodor (1981) endorses the importance of explanation by content-involving states, but holds that content must be narrow, constituted by internal properties of an individual.

One influential motivation for narrow content is a doctrine about explanation: that molecule-for-molecule counterparts must have the same causal powers. Externalists have replied that the attribution of content-involving states presupposes some normal background or context for the subject of the states, and that content-involving explanations commonly take the presupposed background for granted. Molecular counterparts can have different presupposed backgrounds, and their content-involving states may correspondingly differ. Presupposition of a background of external relations in which something stands is found in sciences other than those that employ the notion of content, including astronomy and geology.

A more specific concern of those sympathetic to narrow content is that, when content is externally individuated, the explanatory principles in which content-involving states feature will be a priori in some way that is illegitimate. For instance, it appears to be a priori that behaviour which is intentional under some description involving the concept 'water' will be explained by mental states that have the externally individuated concept 'water' in their content. The externalist about content will have a twofold response. First, explanations in which content-involving states are implicated will also include explanations of the subject's standing in a particular relation to the stuff water itself, and for many such relations it is in no way a priori that the thinker's so standing has a psychological explanation at all. Some such cases will be fundamental to the ascription of externalist content on treatments that tie such content to the rational intelligibility of actions relationally characterized. Second, there are other cases in which the identification of a theoretically postulated state in terms of its relations generates a priori truths, quite consistently with that state playing a role in explanation. It is arguably a priori that if a gene is the gene for a certain phenotypical characteristic, then it plays a causal role in the production of that characteristic in members of the species in question. Far from being incompatible with a claim about explanation, the characterization of genes that would make this a priori also requires genes to have a certain causal explanatory role.

If anything, it is the friend of narrow content who has difficulty accommodating the explananda that content-involving states are fit to explain. For the characteristic explananda of content-involving states, such as walking towards the cinema, are characterized in environment-involving terms. How is the theorist of narrow content to accommodate this fact? He may say that we merely need to add a description of the context of the bodily movement, which ensures that the movement is in fact a movement toward the cinema. But adding such a description to an explanation of the bodily movement does not give one an explanation of the event's having that environmental property, let alone a content-involving explanation of that fact. The bodily movement may also be a walking in the direction of Moscow, but it does not follow that we have a rationally intelligible explanation of the event as a walking in the direction of Moscow. Perhaps the theorist of narrow content would at this point add further relational properties of the internal states, of such a kind that, when his explanation is fully supplemented, it sustains the same counterfactuals and predictions as does the explanation that mentions externally individuated content. But such a fully supplemented explanation is not really in competition with the externalist's account. It begins to appear that if such extensive supplementation is adequate to capture the relational explananda, it is also sufficient to ensure that the subject is in states with externally individuated contents. This problem, however, affects not only treatments of content as narrow, but any attempt to reduce explanation by content-involving states to explanation by neurophysiological states.

One of the tasks of a sub-personal computational psychology is to explain how individuals come to have beliefs, desires, perceptions and other personal-level content-involving properties. If the content of personal-level states is externally individuated, then the contents mentioned in the sub-personal psychology that is explanatory of those personal states must also be externally individuated. One cannot fully explain the presence of an externally individuated state by citing only states that are internally individuated. On an externalist conception of sub-personal psychology, a content-involving computation commonly consists in the explanation of some externally individuated states by other externally individuated states.

This view of sub-personal content has, though, to be reconciled with the fact that the first states in an organism involved in the explanation - retinal states in the case of humans - are not externally individuated. The reconciliation is effected by the presupposed normal background, whose importance to the understanding of content we have already emphasized. An internally individuated state, when taken together with a presupposed external background, can explain the occurrence of an externally individuated state.

An externalist approach to sub-personal content also has the virtue of providing a satisfying explanation of why certain personal-level states are reliably correct in normal circumstances. If the sub-personal computations that cause the subject to be in such states are reliably correct, and the final computation is of the content of the personal-level state, then the personal-level state will be reliably correct. A similar point applies to reliable errors, too, of course. In either case, the attribution of correctness conditions to the sub-personal states is essential to the explanation.

Externalism generates its own set of issues that need resolution, notably in the epistemology of attributions. A content-involving state may be externally individuated, but a thinker does not need to check on his relations to his environment to know the content of his beliefs, desires, and perceptions. How can this be? A thinker’s judgements about his beliefs are rationally responsive to his own conscious beliefs. It is a first step to note that a thinker’s beliefs about his own beliefs will then inherit certain sensitivities to his environment that are present in his original (first-order) beliefs. But this is only the first step, for many important questions remain. How can there be conscious externally individuated states at all? Is it legitimate to infer from the content of one’s states to certain general facts about one’s environment, and if so, how, and under what circumstances?

The ascription of attitudes to others also needs further work on the externalist treatment. In order knowledgeably to ascribe a particular content-involving attitude to another person, we certainly do not need to have explicit knowledge of the external relations required for correct attribution of the attitude. How then do we manage it? Do we have tacit knowledge of the relations on which content depends, or do we in some way take our own case as primary, and think of the relations as whatever underlies certain of our own content-involving states? If the latter, in what wider view of other-ascription should this point be embedded? Resolution of these issues, like so much else in the theory of content, should provide us with some understanding of the conception each one of us has of himself as one mind amongst many, interacting with a common world which provides the anchor for the ascription of content.

A central characteristic of 'thought' is its 'intentionality' or 'content': in thinking, one thinks about certain things, and one thinks certain things about those things - one entertains propositions that represent states of affairs. Nearly all the interesting properties of thoughts depend upon their content: their being coherent or incoherent, disturbing or reassuring, revolutionary or banal, connected logically or illogically to other thoughts. It is thus hard to see why we would bother to talk of thought at all unless we were also prepared to recognize the intentionality of thought. So we are naturally curious about the nature of content: we want to understand what makes it possible, what constitutes it, what it stems from. To have a theory of thought is to have a theory of its content.

Four issues have dominated recent thinking about the content of thought; each may be construed as a question about what thought depends on, and about the consequences of its so depending (or not depending). These potential dependencies concern: (1) the world outside the thinker himself, (2) language, (3) logical truth, (4) consciousness. In each case the question is whether intentionality is essentially or accidentally related to the item mentioned: does it exist, that is, only by courtesy of the dependence of thought on the said item? Answering this question goes a long way towards determining what the intrinsic nature of thought is.

Thoughts are obviously about things in the world, but it is a further question whether they could exist and have the content they do whether or not their putative objects themselves exist. Is what I think intrinsically dependent upon the world in which I happen to think it? This question was given impetus and definition by a thought experiment due to Hilary Putnam, concerning a planet called 'twin earth'. On twin earth there live thinkers who are duplicates of us in all internal respects but whose surrounding environment contains different kinds of natural objects. The suggestion then is that what these thinkers refer to and think about is individuatively dependent upon their actual environment, so that where we think about cats when we say 'cat', they think, using that word, about the different species that actually sits on their mats, and so on. The key point is that, since it is impossible to individuate natural kinds like cats solely by reference to the way they strike the people who think about them, thought content cannot be a function simply of internal properties of the thinker. Content, here, is relational in nature, fixed by external facts as they bear upon the thinker. Much the same point can be made by considering repeated demonstrative reference to distinct particular objects: what I refer to when I say 'that bomb', of different bombs, depends upon the particular bomb in front of me and cannot be deduced from what is going on inside me. Context contributes to content.

Inspired by such examples, many philosophers have adopted an 'externalist' view of thought content: thoughts are not autonomous states of the individual, capable of transcending the contingent facts of the surrounding world. One is therefore not free to think whatever one likes, as it were, whether or not the world beyond cooperates in containing suitable referents for those thoughts. And this conclusion has generated a number of consequential questions. Can we know our thoughts with special authority, given that they are thus hostage to external circumstances? How do thoughts cause other thoughts and behaviour, given that they are not identical with internal states we are in? What kind of explanation are we giving when we cite thoughts? Can there be a science of thought if content does not generalize across environments? These questions have received many different answers, and, of course, not everyone agrees that thought has the kind of world-dependence claimed. Nonetheless, what has not been considered carefully enough is the scope of the externalist thesis - whether it applies to all forms of thought, all concepts. For unless this question is answered affirmatively, we cannot rule out the possibility that thought in general depends on there being some thought that is purely internally determined, so that the externally fixed thoughts are a secondary phenomenon. What about thoughts concerning one's present sensory experience, or logical thought, or ethical thought? Could there, indeed, be a thinker for whom internalism was generally correct? Is external individuation the rule or the exception? And might it take different forms in different cases?

Since words are also about things, it is natural to ask how their intentionality is connected to that of thoughts. Two views have been advocated: one view takes thought content to be self-subsisting relative to linguistic content, with the latter dependent upon the former; the other view takes thought content to be derivative upon linguistic content, so that there can be no thought without a bedrock of language. Thus arise controversies about whether animals, being non-speakers, really think, or whether computers, being non-thinkers, really use language. All such questions depend critically upon what one means by 'language'. Some hold that spoken language is unnecessary for thought but that there must be an inner language in order for thought to be possible, while others reject the very idea of an inner language, preferring to suspend thought from outer speech. However, it is not entirely clear what it amounts to, to assert (or deny) that there is an inner language of thought. If it means merely that concepts (thought constituents) are structured in such a way as to be isomorphic with spoken language, then the claim is trivially true, given some natural assumptions. But if it means that concepts just are 'syntactic' items orchestrated into strings of the same, then the claim is acceptable only in so far as syntax is an adequate basis for meaning - which, on the face of it, it is not. Concepts no doubt have combinatorial powers comparable to those of words, but the question is whether anything else can plausibly be meant by the hypothesis of an inner language.

On the other hand, it appears undeniable that spoken language does not have autonomous intentionality, but instead derives its meaning from the thoughts of speakers - though language may augment one's conceptual capacities. So thought cannot postdate spoken language. The truth seems to be that in human psychology speech and thought are interdependent in many ways, but there is no conceptual necessity about this. The only 'language' on which thought essentially depends is thought itself: thought, indeed, depends upon there being isolable concepts that can join with others to produce complete propositional contents. But this is merely to draw attention to a property any system of concepts must have: it is not to say what concepts are or how they succeed in moving between thoughts as they do. Appeals to language at this point are apt to flounder on circularity, since words take on the powers of concepts only insofar as they express them. Thus there seems little philosophical illumination to be got from making thought depend upon language.

This third dependency question is prompted by the reflection that, while people are no doubt often irrational, woefully so, there seems to be some kind of intrinsic limit to their unreason. Even the sloppiest thinker will not infer anything from anything: to do so is a sign of madness. The question then is what grounds this apparent concession to logical prescription - whence the hold of logic over thought? For the dependence can seem puzzling: why should the natural causal processes of thinking respect the relations of logic? I am free to flout the moral law to any degree I desire, but my freedom to think unreasonably appears to encounter an obstacle in the requirements of logic. My thoughts are sensitive to logical truth in somewhat the way they are sensitive to the world surrounding me: they have not the independence of what lies outside my will or self that I fondly imagined. I may try to reason contrary to modus ponens, but my efforts will be systematically frustrated. Pure logic takes possession of my reasoning processes and steers them according to its own dictates - variably, of course, but in a systematic way that seems perplexing.

One view of this is that ascriptions of thought are not attempts to map a realm of independent causal relations, which might then conceivably come apart from logical relations, but are rather just a useful method of summing up people's behaviour. Another view insists that we must acknowledge that thought is not a natural phenomenon in the way merely physical facts are: thoughts are inherently normative in their nature, so that logical relations constitute their inner essence. Thought incorporates logic in somewhat the way externalists say it incorporates the world. Accordingly, the study of thought cannot be a natural science in the way the study of (say) chemical compounds is. Whether this view is acceptable depends upon whether we can make sense of the idea that transitions in nature, such as episodes of reasoning appear to be, can also be transitions in logical space, i.e., be constrained by the structure of that space. What must thought be, such that this combination of features is possible? Put differently, what is it for logical truth to be self-evident?

This dependency question has been studied less intensively than the previous three. The question is whether intentionality is dependent upon consciousness for its very existence, and if so why. Could our thoughts have the very content they now have if we were not conscious beings at all? Unfortunately, it is difficult to see how to mount an argument in either direction. On the one hand, it can hardly be an accident that our thoughts are conscious and that their content is reflected in the intrinsic condition of our states of consciousness: it is not as if consciousness leaves off where thought content begins - as it does with, say, the neural basis of thought. Yet, on the other hand, it is by no means clear what it is about consciousness that links it to intentionality in this way. Much of the trouble here stems from our exceedingly poor understanding of the nature of consciousness: we do not understand how consciousness could arise from brain tissue (the mind-body problem), and so we fail to grasp the manner in which conscious states bear meaning. Perhaps content is fixed by extra-conscious properties and relations and only subsequently shows up in consciousness, as various naturalistic reductive accounts would suggest; or perhaps consciousness itself plays a more enabling role, allowing meaning to come into the world, hard as this may be to penetrate. In some ways the question is analogous to one about, say, the properties of pain: is the aversive property of pain, causing avoidance behaviour and so forth, essentially independent of the conscious state of feeling, or is it that pain could only have its aversive function in virtue of the conscious feeling? This is part of the more general question of the epiphenomenal character of consciousness: is conscious awareness just a dispensable accompaniment of some mental feature - such as content or causal power - or is consciousness structurally involved in the very determination of the feature? It is only too easy to feel pulled in both directions on this question, neither alternative being utterly felicitous. Some theorists suspect that our uncertainty over such questions stems from a constitutional limitation on human understanding: we just cannot develop the necessary theoretical tools with which to provide answers to these questions, and so we may not in principle be able to make any progress with the issue of whether thought depends upon consciousness and why. Certainly our present understanding falls far short of providing us with any clear route into the question.

It is extremely tempting to picture thought as some kind of inscription in a mental medium and reasoning as a temporal sequence of such inscriptions. On this picture all that a particular thought requires in order to exist is that the medium in question should be impressed with the right inscription. This makes thought independent of anything else. On some views the medium is conceived as consciousness itself, so that thought depends on consciousness as writing depends on paper and ink. But ever since Wittgenstein wrote, we have seen that this conception of thought, and in particular of intentionality, has to be mistaken. The definitive characteristics of thought cannot be captured within this model. Thus, it cannot make room for the idea of intrinsic world-dependence, since any inner inscription would be individuatively independent of items outside the putative medium of thought. Nor can it be made to square with the dependence of thought on logical patterns, since the medium could be configured in any way permitted by its intrinsic nature, without regard for logical truth - as sentences can be written down in any old order one likes. And it misconstrues the relation between thought and consciousness, since content cannot consist in marks on the surface of consciousness, so to speak. States of consciousness do contain particular meanings, but not as a page contains sentences: the medium conception of the relation between content and consciousness is thus deeply mistaken. The only way to make meaning enter internally into consciousness is to deny that consciousness is a medium in which meaning is expressed. It remains notoriously difficult, however, to form an adequate conception of how consciousness does carry content - one puzzle being how the external determinants of content find their way into the fabric of consciousness.

Only the alleged dependence of thought upon language fits the naïvely tempting inscriptional picture, and, as we have seen, this idea tends to crumble under examination. The indicated conclusion seems to be that we simply do not possess a conception of thought that makes its real nature theoretically comprehensible: which is to say that we have no adequate conception of mind. Once we form a conception of thought that makes it seem unmysterious, as with the inscriptional picture, it turns out to have no room for content as it presents itself; while building in content as it presents itself leaves us with no clear picture of what could have such content. Thought is ‘real’, then, if and only if it is mysterious.

In the philosophy of mind, ‘epiphenomenalism’ means that while there exist mental events, states of consciousness, and experiences, these have themselves no causal powers and produce no effect on the physical world. The analogy sometimes used is that of the whistle on the engine, which makes the sound (corresponding to experiences) but plays no part in making the machinery move. Epiphenomenalism is a drastic solution to the major difficulty of reconciling the existence of mind with the fact that, according to physics itself, only a physical event can cause another physical event. An epiphenomenalist may accept one-way causation, whereby physical events produce mental events, or may prefer some kind of parallelism, avoiding causation either between mind and body or between body and mind. Occasionalism, by contrast, is the view that reserves causal efficacy to the action of God: events in the world merely form occasions on which God acts so as to bring about the events normally accompanying them, and thought of as their effects. The position is associated especially with the French Cartesian philosopher Nicolas Malebranche (1638-1715), who inherited the Cartesian view that pure sensation has no representative power, and so added the doctrine that knowledge of objects requires other, representative ideas that are somehow surrogates for external objects. These representative ideas are archetypes of objects as they exist in the mind of God, so that ‘we see all things in God’. In the philosophy of mind, the difficulty of seeing how mind and body can interact suggests that we ought instead to think of them as two systems running in parallel. When I stub my toe, this does not itself cause pain; rather, there is a harmony between the mental and the physical (perhaps due to God) that ensures that there will be a simultaneous pain. When I form an intention and then act, the same benevolence ensures that my action is appropriate to my intention. The theory has never been wildly popular, and many philosophers would say that it was the result of a misconceived ‘Cartesian dualism’. A major problem for epiphenomenalism, in any case, is that if mental events have no causal powers, it is not clear that they can be objects of memory, or even of awareness.

‘Base and superstructure’ is the metaphor used by the founder of revolutionary communism, Karl Marx (1818-83), and the German social philosopher and collaborator of Marx, Friedrich Engels (1820-95), to characterize the relation between the economic organization of society, which is its base, and the political, legal, and cultural organization and social consciousness of a society, which is its superstructure. The sum total of the relations of production of material life conditions the social, political, and intellectual life process in general. The way in which the base determines the superstructure has been the subject of much debate, with writers from Engels onwards concerned to distance themselves from the crude determinism that the metaphor might suggest. It has also been argued that the relations of production are not merely economic, but involve political and ideological relations. The view that all causal power is centred in the base, with everything in the superstructure merely epiphenomenal, is sometimes called economism. The problems are strikingly similar to those that arise when the mental is regarded as supervenient upon the physical, and it is then disputed whether this takes all causal power away from mental properties.

Just the same, if, as the causal theory of action implies, intentional action requires that a desire for something and a belief about how to obtain what one desires play a causal role in producing behaviour, then, if epiphenomenalism is true, we cannot perform intentional actions. Merely describing events that happen does not of itself permit us to talk of rationality and intention, which are the categories we apply when we conceive of them as actions. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the ‘will’ and ‘free will’. Other problems in the theory of action include drawing the distinction between the structures involved when we do one thing ‘by’ doing another thing. Even the placing and dating of an action can give rise to puzzles, as when the fatal act is performed on one day and in one place, and the victim then dies on another day and in another place: where and when did the murder take place? The notion of intention inherits all the problems of ‘intentionality’. The specific problems it raises include characterizing the difference between doing something accidentally and doing it intentionally. The suggestion that the difference lies in a preceding act of mind or volition is not very happy, since one may automatically do what is nevertheless intentional, for example putting one’s foot forward while walking. Conversely, unless the formation of a volition is itself intentional, and thus raises the same questions, the presence of a volition might be unintentional or beyond one’s control. Intentions are more finely grained than movements: one set of movements may be both answering a question and starting a war, yet the one may be intentional and the other not.

However, according to the traditional doctrine of epiphenomenalism, things are not as they seem: in reality, mental phenomena can have no causal effects; they are causally inert, causally impotent. Only physical phenomena are causally efficacious. Mental phenomena are caused by physical phenomena, but they cannot cause anything. In short, mental phenomena are epiphenomenal.

The epiphenomenalist claims that mental phenomena seem to be causes only because there are regularities that involve types (or kinds) of mental phenomena. For example, instances of a certain mental type ‘M’, e.g., trying to raise one’s arm, might tend to be followed by instances of a physical type ‘P’, e.g., one’s arm rising. To infer that instances of ‘M’ tend to cause instances of ‘P’ would be, however, to commit the fallacy of post hoc, ergo propter hoc. Instances of ‘M’ cannot cause instances of ‘P’: such causal transactions are causally impossible. M-type events tend to be followed by P-type events because instances of such events are dual effects of common physical causes, not because such instances causally interact. Mental events and states can figure in the web of causal relations only as effects, never as causes.

Epiphenomenalism is a truly stunning doctrine. If it is true, then no pain could ever be a cause of our wincing, nor could something’s looking red to us ever be a cause of our thinking that it is red. A nagging headache could never be a cause of a bad mood. Moreover, if the causal theory of memory is correct, then, given epiphenomenalism, we could never remember our prior thoughts, or an emotion we once felt, or a toothache we once had, or having heard someone say something, or having seen something: for such mental states and events could not be causes of memories. Furthermore, epiphenomenalism is arguably incompatible with the possibility of intentional action. For if, as the causal theory of action implies, intentional action requires that a desire for something and a belief about how to obtain what one desires play a causal role in producing behaviour, then, if epiphenomenalism is true, we cannot perform intentional actions. As it stands, the functionalist theory needs to be expanded to accommodate this point - most obviously, by specifying the circumstances in which belief-desire explanations are to be deployed. However, matters are not as simple as they seem. On the functionalist theory, beliefs are causal functions from desires to actions. This creates a problem, because all of the different modes of psychological explanation appeal to states that fulfil a similar causal function from desires to actions. Of course, it is open to a defender of the functionalist approach to say that it is strictly beliefs, and not, for example, innate releasing mechanisms, that interact with desires in a way that generates actions. Nonetheless, this sort of response is of limited effectiveness unless some reason is given for distinguishing between, say, a state of hunger and a desire for food; it is no use simply to describe desires as functions from beliefs to actions.

Of course, to say that the functionalist theory of belief needs to be expanded is not to say that it needs to be expanded along non-functionalist lines. Nothing that has been said rules out the possibility that a correct and adequate account of what distinguishes beliefs from non-intentional psychological states can be given purely in terms of respective functional roles. The core of the functionalist theory of self-reference is the thought that agents can have subjective beliefs that do not involve any internal representation of the self, linguistic or non-linguistic. It is in virtue of this that the functionalist theory claims to be able to dissolve the paradox. The problem that has emerged, however, is that it remains unclear whether those putative subjective beliefs really are beliefs. On the functionalist thesis, all cases of action to be explained in terms of belief-desire psychology have to be explained through the attribution of beliefs. The thesis is clearly at work when the utility conditions, and hence truth conditions, of the belief that causes the hungry creature facing food to eat what is in front of it are taken to determine the content of the belief to be ‘There is food in front of me’, or ‘I am facing food’. The problem, however, is that it is not clear that this is warranted. Either content would explain why the animal eats what is in front of it; nonetheless, the two ascriptions implicate different thoughts, only one of which can be the creature’s genuine thought.

Now, the content of the belief that the functionalist theory demands we ascribe to an animal facing food is ‘I am facing food now’ or ‘There is food in front of me now’. These are, it seems clear, structured thoughts; so too, for that matter, is the indexical thought ‘There is food here now’. The crucial point, however, is that the causal function from desires to actions, which, in itself, is all that a subjective belief is, would be equally well served by the unstructured thought ‘Food’.

At the heart of the reason-giving relation is a normative claim. An agent has a reason for believing, acting, and so forth, if, given his or her other psychological states, this belief or action is justified or appropriate. Displaying someone’s reasons consists in making clear this justificatory link. Paradigmatically, the psychological states that provide an agent with reasons are intentional states individuated in terms of their propositional content. There is a long tradition that emphasizes that the reason-giving relation is a logical or conceptual relation. In the case of reasons for action, the premises of any reasoning are provided by intentional states other than beliefs.

Notice that we cannot then assert that epiphenomenalism is true, if it is, since an assertion is an intentional speech act. Further, if epiphenomenalism is true, then our sense that we are agents who can act on our intentions and carry out our purposes is illusory. We are actually passive bystanders, never agents; in no relevant sense is what happens up to us. Our sense of even partial causal control over our lives is an illusion; we exert no causal control over even the direction of our attention. Finally, suppose that reasoning is a causal process. Then, if epiphenomenalism is true, we never reason: for there are no mental causal processes. While one thought may follow another, one thought never leads to another. Indeed, while thoughts may occur, we do not engage in the activity of thinking. How, then, could we make inferences that commit the fallacy of post hoc, ergo propter hoc, or make any inferences at all for that matter?

As neurophysiological research began to develop in earnest during the latter half of the nineteenth century, it seemed to find no mental influence on what happens in the brain. While it was not shown that neurophysiological events by themselves causally determine other neurophysiological events, there seemed to be no ‘gaps’ in neurophysiological causal mechanisms that could be filled by mental occurrences. Neurophysiology appeared to have no need of the hypothesis that there are mental events. (Here and hereafter, unless indicated otherwise, ‘events’ in the broadest sense will include states as well as changes.) This ‘no gap’ line of argument led some theorists to deny that mental events have any causal effects. They reasoned as follows: if mental events have any effects, among their effects would be neurophysiological ones; mental events have no neurophysiological effects; thus, mental events have no effects at all. The relationship between mental phenomena and neurophysiological mechanisms was likened to that between the steam-whistle which accompanies the working of a locomotive engine and the mechanisms of the engine: just as the steam-whistle is an effect of the operations of the mechanisms but has no causal influence on those operations, so too mental phenomena are effects of the workings of neurophysiological mechanisms, but have no causal influence on their operations. (The analogy quickly breaks down, as steam-whistles have causal effects, whereas the epiphenomenalist alleges that mental phenomena have no causal effects at all.)

An early response to this ‘no gap’ line of argument was that mental events (and states) are not changes in (and states of) an immaterial Cartesian substance; they are, rather, changes in (and states of) the brain. While mental properties or kinds are not neurophysiological properties or kinds, nevertheless particular mental events are neurophysiological events. According to the view in question, a given event can be an instance of both a neurophysiological type and a mental type, and thus be both a mental event and a neurophysiological event. (Compare the fact that an object might be an instance of more than one kind of object: for example, an object might be both a stone and a paper-weight.) It was held, moreover, that mental events have causal effects because they are neurophysiological events with causal effects. This response presupposes that causation is an ‘extensional’ relation between particular events: that if two events are causally related, they are so related however they are typed (or described). That assumption is today widely held. And given that the causal relation is extensional, if particular mental events are indeed neurophysiological events with causal effects, then mental events are causes, and epiphenomenalism is thus false.

This response to the ‘no gap’ argument, however, prompts a concern about the relevance of mental properties or kinds to causal relations. In 1925 C.D. Broad tells us that the view that mental events are epiphenomenal is the view that mental events either (a) do not function at all as causal factors, or (b) if they do, they do so in virtue of their physiological characteristics and not in virtue of their mental characteristics. If particular mental events are physiological events with causal effects, then mental events function as causal factors: they are causes. The question still remains, however, whether mental events are causes in virtue of their mental characteristics. Neurophysiology, it seems, can explain neurophysiological occurrences without postulating mental characteristics. This prompts the concern that even if mental events are causes, they may be causes in virtue of their physiological characteristics, but not in virtue of their mental characteristics.

This concern presupposes, of course, that events are causes in virtue of certain of their characteristics or properties. But it is today fairly widely held that when two events are causally related, they are so related in virtue of something about each. Indeed, theories of causation assume that if two events ‘x’ and ‘y’ are causally related, and two other events ‘a’ and ‘b’ are not, then there must be some difference between ‘x’ and ‘y’ on the one hand and ‘a’ and ‘b’ on the other in virtue of which ‘x’ and ‘y’ are, but ‘a’ and ‘b’ are not, causally related. And such theories attempt to say what that difference is: that is, they attempt to say what it is about causally related events in virtue of which they are so related. For example, according to so-called ‘nomic subsumption’ views of causation, causally related events will be so related in virtue of falling under types (or in virtue of having properties) that figure in a ‘causal law’. It should be noted that the assumption that causally related events are so related in virtue of something about each is compatible with the assumption that the causal relation is an ‘extensional’ relation between particular events. The weighs-less-than relation is an extensional relation between particular objects: if O weighs less than O*, then O and O* are so related however they are typed (or characterized, or described); nevertheless, if O weighs less than O*, that is so in virtue of something about each, namely their weights and the fact that the weight of one is less than the weight of the other. Examples are readily multiplied: extensional relations between particulars typically hold in virtue of something about the particulars. We will grant, then, that when two events are causally related, they are so related in virtue of something about each.

Invoking the distinction between types and tokens, and using the term ‘physical’ rather than the more specific term ‘physiological’, we can distinguish two broad doctrines of epiphenomenalism:

Token Epiphenomenalism: Mental events cannot cause anything.

Type Epiphenomenalism: No event can cause anything in virtue of falling under a mental type.

So too, property epiphenomenalism is the thesis that no event can cause anything in virtue of having a mental property. The conjunction of token epiphenomenalism and the claim that physical events cause mental events is, of course, the traditional doctrine of epiphenomenalism, as characterized earlier. Token epiphenomenalism implies type epiphenomenalism: for if an event could cause something in virtue of falling under a mental type, then a mental event could be a cause, and token epiphenomenalism would be false. Thus, if mental events cannot be causes, then events cannot be causes in virtue of falling under mental types. The denial of token epiphenomenalism does not, however, imply the denial of type epiphenomenalism, since a mental event might be a physical event that has causal effects. For, if so, type epiphenomenalism may still be true: it may be that events cannot be causes in virtue of falling under mental types, even though mental events are causes in virtue of falling under physical types. Thus, even if token epiphenomenalism is false, the question remains whether type epiphenomenalism is.
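The logical relationship just described can be made explicit in a rough formalization. The notation is ours, offered only as a sketch: ‘Mental(e)’ says that event e is a mental event, ‘MentalType(T)’ that T is a mental type, ‘Causes(e, e′)’ that e causes e′, and ‘CIVO(e, e′, T)’ that e causes e′ in virtue of falling under T.

\[
\begin{aligned}
\textbf{Token:}\quad & \forall e\,\bigl(\mathrm{Mental}(e) \rightarrow \neg\exists e'\,\mathrm{Causes}(e,e')\bigr)\\
\textbf{Type:}\quad & \forall e\,\forall e'\,\forall T\,\bigl(\mathrm{MentalType}(T) \rightarrow \neg\,\mathrm{CIVO}(e,e',T)\bigr)
\end{aligned}
\]

Given two bridge principles that seem safe here - that CIVO(e, e′, T) implies Causes(e, e′), and that an event falling under a mental type is a mental event - Token entails Type: a counterexample to Type would be a mental event that causes something, contradicting Token. Nothing licenses the converse entailment, which is why the falsity of token epiphenomenalism leaves type epiphenomenalism an open question.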

Suppose, for the sake of argument, that type epiphenomenalism is true. Why would that be a concern if mental events are physical events with causal effects? On our assumption that the causal relation is extensional, it could be true, consistently with type epiphenomenalism, that pains cause winces, that desires cause behaviour, that perceptual experiences cause beliefs, that mental states cause memories, and that reasoning processes are causal processes. Nevertheless, while perhaps not as disturbing a doctrine as token epiphenomenalism, type epiphenomenalism can, upon reflection, seem disturbing enough.

Notice, to begin with, that ‘in virtue of’ expresses an explanatory relationship; indeed, ‘in virtue of’ is arguably a near synonym of the more common locution ‘because of’. In any case, the following seems adequate: an event causes a G-event in virtue of being an F-event if and only if it causes a G-event because of being an F-event. ‘In virtue of’ implies ‘because of’, and in the case in question at least the implication seems to go in the other direction as well. Suffice it to note that were type epiphenomenalism consistent with its being the case that an event could have a certain effect because of falling under a certain mental type, then we would indeed be owed an explanation of why it should be of any concern if type epiphenomenalism is true. We will, however, assume that type epiphenomenalism is inconsistent with that. We will assume that type epiphenomenalism can be reformulated as: no event can cause anything because of falling under a mental type. (And we will assume that property epiphenomenalism can be reformulated thus: no event can cause anything because of having a mental property.) To say that ‘a’ causes ‘b’ in virtue of being ‘F’ is to say that ‘a’ causes ‘b’ because of being ‘F’; that is, it is to say that it is because ‘a’ is ‘F’ that it causes ‘b’. So understood, type epiphenomenalism is a disturbing doctrine indeed.

If type epiphenomenalism is true, then it could never be the case that circumstances are such that it is because some event or state is a sharp pain, or a desire to flee, or a belief that danger is near, that it has a certain sort of effect. It could never be the case that it is because some state is a desire to ‘X’ (impress someone) and another is a belief that one can ‘X’ by doing ‘Y’ (standing on one’s head) that the states jointly result in one’s doing ‘Y’ (standing on one’s head). If type (property) epiphenomenalism is true, then nothing has any causal powers whatever in virtue of (because of) being an instance of a mental type; it could never be the case that it is in virtue of being of a certain mental type that a state has the causal power in certain circumstances to produce some effect. For example, it could never be the case that it is in virtue of being an urge to scratch (or a belief that danger is near) that a state has the causal power in certain circumstances to produce scratching behaviour (or fleeing behaviour). If type epiphenomenalism is true, then the mental qua mental, so to speak, is causally impotent. That may very well seem disturbing enough.

What reason is there, however, for holding type epiphenomenalism? Even if neurophysiology does not need to postulate types of mental events, perhaps the science of psychology does. Note that physics has no need to postulate types of neurophysiological events, but that need not lead one to doubt that an event can have effects in virtue of being (say) a neuron firing. Moreover, mental types figure in our everyday causal explanations of behaviour, intentional action, memory, and reasoning. What reason is there, then, for holding that events cannot have effects in virtue of being instances of mental types? This question naturally leads to the more general question of which event types are such that events have effects in virtue of falling under them. This more general question is best addressed after considering a ‘no gap’ line of argument that has emerged in recent years.

Current physics includes quantum mechanics, a theory which appears able, in principle, to explain how chemical processes unfold in terms of the mechanics of subatomic particles. Molecular biology seems able, in principle, to explain how the physiological operations of systems in living things occur in terms of biochemical pathways, long chains of chemical reactions. On the evidence, biological organisms are complex physical objects, made up of molecular parts (there are no entelechies or élans vitaux). Since we are biological organisms, the movements of our bodies and of their minute parts, including the chemicals in our brains, and so forth, are causally determined by the behaviour of subatomic particles and fields. Such considerations have inspired a line of argument that only events within the domain of physics are causes.

Before presenting the argument, let us make some terminological stipulations. Let us henceforth use ‘physical event type’ (‘physical state type’) and ‘physical property’ in a strict and narrow sense to mean, respectively, a type of event (or state) and a property postulated by current physics (or by some improved version of current physics), that is, one that figures in laws of physics. Finally, by ‘a physical event (state)’ we will mean an event (state) that falls under a physical type. Only events within the domain of (current) physics (or some improved version of current physics) count as physical in this strict and narrow sense.

Consider, then:

The Token-Exclusion Thesis: Only physical events can have causal effects (i.e., as a matter of causal necessity, only physical events have causal effects).

The premises of the basic argument for the token-exclusion thesis are:

Physical Causal Closure: Only physical events can cause physical events.

Causation by way of Physical Effects: As a matter of at least causal necessity, an event is a cause of another event if and only if it is a cause of some physical event.

These principles jointly imply the exclusion thesis: if an event causes anything, then, by the principle of causation by way of physical effects, it causes some physical event, and so, by physical causal closure, it must itself be a physical event. The principle of causation by way of physical effects is supported on the empirical grounds that every event occurs within space-time, and by the principle that an event is a cause of an event occurring within a given region of space-time if and only if it is a cause of some physical event that occurs within that region of space-time. The following claim is offered in support of physical causal closure:

Physical Causal Determination: For any (caused) physical event ‘P’, there is a chain of entirely physical events leading to ‘P’, each link of which causally determines its successor.

(A qualification: if strict determinism is not true, then each link will determine the objective probability of its successor.) Physics is such that there is compelling empirical reason to believe that physical causal determination holds: every physical event will have a sufficient physical cause. More precisely, there will be a deterministic causal chain of physical events leading to any physical event ‘P’ (or, if determinism fails, a chain of probability-determining links). But such links there will be, and such physical causal chains are entirely ‘gap-less’. Now, to be sure, physical causal determination does not imply physical causal closure: the former, but not the latter, is consistent with non-physical events causing physical events. However, a standard epiphenomenalist response to this is that such non-physical events would be, without exception, over-determining causes of physical events, and it is ad hoc to maintain that non-physical events are over-determining causes of physical events.

Are mental events within the domain of physics? Perhaps, like objects, events can fall under many different types or kinds. We noted earlier that a given object might, for instance, be both a stone and a paper-weight. We understand how a stone could be a paper-weight; but how, for instance, could an event involving subatomic particles and fields be a mental event? Suffice it to note for the moment that if mental events are not within the domain of physics, then, if the token-exclusion thesis is true, no mental event can ever cause anything: token epiphenomenalism is true.

One might reject the token-exclusion thesis, however, on the grounds that typical events within the domains of the special sciences - chemistry, the life sciences, and so on - are not within the domain of physics, but nevertheless have causal effects. One might maintain that a neuron firing, for instance, causes other neuron firings, even though neurophysiological events are not within the domain of physics. Rejecting the token-exclusion thesis, however, requires arguing either that physical causal closure is false or that the principle of causation by way of physical effects is.

One response to the ‘no-gap’ argument from physics is to reject physical causal closure. Recall that physical causal determination is consistent with non-physical events being over-determining causes of physical events. One might concede that it would be ad hoc to maintain that a non-physical event ‘N’ is an over-determining cause of a physical event ‘P’ - that is, that ‘N’ causes ‘P’ in a way that is independent of the causation of ‘P’ by other physical events - but argue, nonetheless, that ‘N’ can cause a physical event ‘P’ in a way that is dependent upon P’s being caused by physical events. Again, one might argue that physical events ‘underlie’ non-physical events, and that a non-physical event ‘N’ can be a cause of another event ‘X’ (physical or non-physical) in virtue of the physical events that ‘underlie’ ‘N’ being causes of ‘X’.

Another response is to deny the principle of causation through physical effects. Physical causal closure is consistent with non-physical events causing other non-physical events. One might concede physical causal closure but deny the principle of causation by way of physical effects, and argue that non-physical events cause other non-physical events without causing physical events. This would not require denying that (1) physical events invariably ‘underlie’ non-physical events, or that (2) whenever a non-physical event causes another non-physical event, some physical event that underlies the first event causes a physical event that underlies the second. Claims (1) and (2) together do not imply the principle of causation through physical effects. Moreover, from the fact that a physical event ‘P’ causes another physical event ‘P*’, it does not follow that ‘P’ causes every non-physical event that ‘P*’ underlies. Nor would that follow even if the physical events that underlie non-physical events causally suffice for those non-physical events; it would follow from that only that for every non-physical event there is a causally sufficient physical event. But it may be denied that causal sufficiency suffices for causation: it may be argued that there are further constraints on causation that can fail to be met by an event that causally suffices for another. Moreover, it may be argued that, given the further constraints, non-physical events are the causes of non-physical events.

However, the most common response to the ‘no-gap’ argument from physics is to concede it, and thus to embrace its conclusion, the token-exclusion thesis, but to maintain the doctrine of ‘token physicalism’, the doctrine that every event (state) is within the domain of physics. If special science events and mental events are within the domain of physics, then they can be causes consistently with the token-exclusion thesis.

Now, whether special science events and mental events are within the domain of physics depends, in part, on the nature of events, and that is a highly controversial topic about which there is nothing approaching a received view. The topic raises deep issues concerning the ‘essence’ of events and the relationship between causation and causal explanation that are, in any case, beyond the scope of this essay. Suffice it to note here that the same fundamental issues concerning the causal efficacy of the mental are believed to arise for all the leading theories of the ‘relata’ of the causal relation; the issues just ‘pop up’ in different places. That, however, cannot be argued here, and will have to be assumed.

Since the token physicalism response to the no-gap argument from physics is the most popular response, let us assume that special science events, and even mental events, are within the domain of physics. Of course, if mental events are within the domain of physics, then token epiphenomenalism can be false even if the token-exclusion thesis is true: for mental events may be physical events which have causal effects.

Nevertheless, concerns about the causal relevance of mental properties and event types would remain. Indeed, token physicalism, together with a fairly uncontroversial assumption, naturally leads to the question of whether events can be causes only in virtue of falling under types postulated by physics. The assumption is that physics postulates a system of event types that has the following feature:

Physical Causal Comprehensiveness: When two physical events are causally related, they are so related in virtue of falling under physical types.

That thesis naturally invites the question of whether the following is true:

The Type-Exclusion Thesis: An event can cause something only in virtue of falling under a physical type, i.e., a type postulated by physics.

The type-exclusion thesis offers one would-be answer to our earlier question of which event types are such that events have effects in virtue of falling under them. If it is the correct answer, however, then the fact (if it is a fact) that special science events and mental events are within the domain of physics will be cold comfort. For type physicalism, the thesis that every event type is a physical type, seems false. Mental types seem not to be physical types in our strict and narrow sense: no mental type, it seems, is necessarily coextensive (i.e., coextensive in every ‘possible world’) with any type postulated by physics. Given that, and given the type-exclusion thesis, type epiphenomenalism is true. However, typical special science types also fail to be necessarily coextensive with any physical types, and thus typical special science types fail to be physical types. Indeed, we individuate the sciences in part by the event (state) types they postulate. Given that typical special science types are not physical types (in our strict sense), then, by the type-exclusion thesis, typical special science types are not such that events can have causal effects in virtue of falling under them.

Besides, since a neuron firing is not a type of event postulated by physics, then, given the type-exclusion thesis, no event could ever have any causal effects in virtue of being a firing of a neuron. The neurophysiological qua neurophysiological is causally impotent. Moreover, if things have causal powers only in virtue of their physical properties, then an HIV virus, qua HIV virus, does not have the causal power to contribute to depressing the immune system: for being an HIV virus is not a physical property (in our strict sense). Similarly, for the same reason, the Salk vaccine, qua Salk vaccine, would not have the causal power to contribute to producing an immunity to polio. Furthermore, if, as it seems, phenotypic properties are not physical properties, then phenotypic properties do not endow organisms with causal powers conducive to survival. Having hands, for instance, could never endow anything with causal powers conducive to survival, since it could never endow anything with any causal powers whatsoever. But how, then, could phenotypic properties be units of natural selection? And if, as it seems, genotypes are not physical types, then, given the type-exclusion thesis, genes do not have the causal power, qua genotypes, to transmit the genetic bases of phenotypes. How, then, could the role of genotypes as units of heredity be a causal role? There seem to be ample grounds for scepticism that any reason for holding the type-exclusion thesis could outweigh our reasons for rejecting it.

We noted that the thesis of universal physical causal comprehensiveness, or ‘upc-comprehensiveness’ for short, invites the question of whether the type-exclusion thesis is true. But does upc-comprehensiveness imply the type-exclusion thesis? Can one accept upc-comprehensiveness while rejecting the type-exclusion thesis?

Notice that there is a crucial one-word difference between the two theses: the exclusion thesis contains the word ‘only’ in front of ‘in virtue of’, while the thesis of upc-comprehensiveness does not. This difference is relevant because ‘in virtue of’ does not imply ‘only in virtue of’. I am a brother in virtue of being a male with a sister, but I am also a brother in virtue of being a male with a brother; and, of course, one can be a male with a sister without being a male with a brother, and conversely. Likewise, I live in the province of Ontario in virtue of living in the city of Toronto, but it is also true that I live in Canada in virtue of living in the County of York. Moreover, the point holds in the general case: if something ‘x’ bears a relation ‘R’ to something ‘y’ in virtue of x’s being ‘F’ and y’s being ‘G’, it does not follow that it does so only in virtue of these. Suppose that ‘x’ weighs less than ‘y’ in virtue of x’s weighing n lbs. and y’s weighing m lbs. (where m is greater than n). Then it is also true that ‘x’ weighs less than ‘y’ in virtue of x’s weighing under m lbs. and y’s weighing over n lbs. And something can, of course, weigh under m lbs. without weighing n lbs. To repeat, ‘in virtue of’ does not imply ‘only in virtue of’.

Why, then, think that upc-comprehensiveness implies the type-exclusion thesis? The fact that two events are causally related in virtue of falling under physical types does not seem to exclude the possibility that they are also causally related in virtue of falling under non-physical types - in virtue of the one being (say) a firing of a certain neuron and the other a firing of a certain other neuron, or in virtue of one being a secretion of enzymes and the other a breakdown of amino acids. Notice that the thesis of upc-comprehensiveness implies that whenever an event is an effect of another, it is so in virtue of falling under a physical type. But the thesis does not seem to imply that whenever an event is an effect of another, it is so only in virtue of falling under a physical type. Upc-comprehensiveness seems consistent with events being effects in virtue of falling under non-physical types. Similarly, the thesis seems consistent with events being causes in virtue of falling under non-physical types.

Nevertheless, an explanation is called for of how events could be causes in virtue of falling under non-physical types if upc-comprehensiveness is true. The most common strategy for offering such an explanation involves maintaining that there is a dependence-determination relationship between non-physical types and physical types. Upc-comprehensiveness, together with the claim that instances of non-physical event types are causes or effects, implies that, as a matter of causal necessity, whenever an event falls under a non-physical event type, it falls under some physical type or other. The instantiation of a non-physical type by an event thus depends, as a matter of causal necessity, on the instantiation of some or other physical event type by the event. It is held, further, that non-physical types are ‘realized’ by physical types in physical contexts: although a given non-physical type might be realizable by more than one physical type, the occurrence of a physical type in a physical context in some sense determines the occurrence of any non-physical type that it ‘realizes’.

Recall the considerations that inspired the ‘no gap’ argument from physics: quantum mechanics seems able, in principle, to explain how chemical processes unfold in terms of the mechanics of subatomic particles; molecular biology seems able, in principle, to explain how the physiological operations of systems in living things occur in terms of biochemical pathways, long chains of chemical reactions. Types of subatomic causal processes ‘implement’ types of chemical processes. Many in the cognitive science community hold that computational processes implement mental processes, and that computational processes are implemented, in turn, by neurophysiological processes.

The Oxford English Dictionary gives the everyday meaning of ‘cognition’ as ‘the action or faculty of knowing’. The philosophical meaning is the same, but with the qualification that it is to be ‘taken in its widest sense, including sensation, perception, conception, and volition’. Given the historical link between psychology and philosophy, it is not surprising that ‘cognitive’ in ‘cognitive psychology’ has something like this broader sense, rather than the everyday one. Nevertheless, the semantics of ‘cognitive psychology’, like that of many adjective-noun combinations, is not entirely transparent. Cognitive psychology is a branch of psychology, and its subject matter approximates to the psychological study of cognition; but, for reasons that are largely historical, its scope is not exactly what one would predict.

Many cognitive psychologists have little interest in philosophical issues, though cognitive scientists are, in general, more receptive. Fodor, because of his early involvement in sentence-processing research, is taken seriously by many psycholinguists. His modularity thesis is directly relevant to questions about the interplay of different types of knowledge in language understanding. His innateness hypothesis, however, is generally regarded as unhelpful, and his prescriptions for cognitive psychology are largely ignored. Dennett’s recent work on consciousness treats a topic that is highly controversial, but his detailed discussion of psychological research findings has enhanced his credibility among psychologists. Overall, psychologists are happy to get on with their work without philosophers telling them about their ‘mistakes’.

The hypothesis driving most of modern cognitive science is simple to state: the mind is a computer. What are the consequences for the philosophy of mind? This question acquires heightened interest and complexity from the new forms of computation employed in recent cognitive theory.

Cognitive science has traditionally been based upon symbolic computation systems: systems of rules for manipulating structures built up of tokens of different symbol types. (This classical kind of computation is a direct outgrowth of mathematical logic.) Since the mid-1980s, however, cognitive theory has increasingly employed connectionist computation: the spread of numerical activation across units. On this approach, one of the most impressive and plausible ways of modelling cognitive processes is by means of a connectionist, or parallel distributed processing, computer architecture. In such a system data are input into a number of cells at one level, which pass activation on to intermediate, or hidden, units, which in turn deliver an output.

Such a system can be ‘trained’ by adjusting the weights a hidden unit accords to each signal from an earlier cell. The training is accomplished by ‘back-propagation of error’, meaning that if the output is incorrect the network makes the minimum adjustment necessary to correct it. Such systems prove capable of producing differentiated responses of great subtlety. For example, a system may be able to take as input written English and deliver as output phonetically accurate speech. Proponents of the approach also point out that networks have a certain resemblance to the layers of cells that make up a human brain, and that, like us but unlike conventional computer programs, networks degrade gracefully, in the sense that with local damage they go blurry rather than crashing altogether. Controversy has concerned the extent to which the differentiated responses made by networks deserve to be called recognitions, and the extent to which other cognitive functions, including linguistic and computational ones, are well approached in these terms.
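To make the architecture concrete, here is a minimal sketch in Python (using NumPy) of the kind of network just described: input units feed a layer of hidden units, which feed an output unit, and the connection weights are adjusted by back-propagation of error. The task (learning the XOR function), the layer sizes, the learning rate, and the number of training passes are illustrative assumptions of ours, not anything claimed above; real connectionist models, such as the text-to-speech networks mentioned, are of course far larger.

import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, a classic toy problem for such networks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases: 2 input units -> 4 hidden units -> 1 output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5  # learning rate

for _ in range(10000):
    # Forward pass: activation spreads from input to hidden to output units.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Back-propagation of error: the output error is passed backwards and
    # each weight is nudged in the direction that reduces it.
    delta_out = (y - output) * output * (1 - output)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 += lr * hidden.T @ delta_out
    b2 += lr * delta_out.sum(axis=0)
    W1 += lr * X.T @ delta_hidden
    b1 += lr * delta_hidden.sum(axis=0)

print(np.round(output, 2))  # tends toward [[0], [1], [1], [0]] after training

The point of interest for the discussion above is that nothing in the trained system is a symbolic rule: its ‘knowledge’ of the input-output mapping is simply the final configuration of numerical weights, which is one reason such systems are said to degrade gracefully under local damage.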

Some terminology will prove useful. Let us stipulate that an event type ‘T’ is a causal type if and only if there is at least one type ‘T*’ such that something can cause a ‘T*’ in virtue of being a ‘T’. And let us say that an event type is realizable by physical event types (or physical properties) if and only if it is at least causally possible for an event of that type to be realized by a physical event type. Given that non-physical causal types must be realizable by physical types, and given that mental types are non-physical types, there are two ways that mental types might fail to be causal. First, mental types may fail to be realizable by physical types. Second, mental types might be realizable by physical types but fail to meet some further condition for being causal types. Reasons of both sorts can be found in the literature on mental causation for denying that any mental types are causal. There has been much attention paid to reasons of the first sort in the case of phenomenal mental types (pain states, visual states, and so forth), and much attention to reasons of the second sort in the case of intentional mental states (i.e., beliefs that P, desires that Q, intentions that R, and so on).

Notice that intentional states figure in explanations of intentional actions not only in virtue of their intentional mode (whether they are beliefs or desires, and so on) but also in virtue of their contents, i.e., what is believed, or desired, and so forth. For example, what causally explains someone’s doing ‘A’ (standing on his head) is that the person wants to ‘X’ (impress someone) and believes that by doing ‘A’ he will ‘X’. The contents of the belief and desire (what is believed and what is desired) seem essential to the causal explanation of the agent’s doing ‘A’. Similarly, we often causally explain why someone came to believe that ‘P’ by citing the fact that the individual came to believe that ‘Q’ and inferred ‘P’ from ‘Q’. In such cases, the contents of the states in question are essential to the explanation. This is not, of course, to say that contents themselves are causally efficacious; contents are not among the relata of causal relations. The point is, rather, that when giving such explanations we characterize states not only as having intentional modes, but also as having certain contents: we type states for the purposes of such explanations in terms of their intentional modes and their contents. We might call intentional state types that include content properties ‘conceptual intentional state types’, but to avoid prolixity, let us call them ‘intentional state types’ for short. Thus, for present purposes, by ‘intentional state types’ we will mean types such as the belief that ‘P’, the desire that ‘Q’, and so on, and not types such as belief, desire, and the like.

The American philosopher Hilary Putnam in 1981 marked a departure from scientific realism in favour of a subtle position that he called internal realism, initially related to an ideal-limit theory of truth and apparently maintaining affinities with verificationism, but in subsequent work more closely aligned with ‘minimalism’. Putnam’s concern in the later period has largely been to deny any serious asymmetry between truth and knowledge as it is obtained in natural science and as it is obtained in morals and even theology. Still, although it was no part of Putnam’s purpose to raise concerns about whether intentional states are causal, his well-known ‘twin earth’ thought experiments have prompted such concerns. These thought experiments are fairly widely held to show that individuals alike in every intrinsic physical respect can have intentional states with different contents. If they show that, then intentional state types fail to supervene on intrinsic physical state types. The reason is that which contents an individual’s beliefs, desires, and the like have depends, in part, on extrinsic, contextual factors. Given that, the concern has been raised that states cannot have effects in virtue of falling under intentional state types.

One concern seems to be that states cannot have effects in virtue of falling under intentional state types because individuals who are in all and only the same intrinsic states must have all and only the same causal powers. In response to that concern, it might be pointed out that causal powers often depend on context. Consider weight. The weights of objects do not supervene on their intrinsic properties: two objects can be exactly alike in every intrinsic respect (and thus have the same mass) yet have different weights. Weight depends, in part, on extrinsic, contextual factors. Nonetheless, it seems true that an object can make a scale read 10 lbs in virtue of weighing 10 lbs. Thus, objects which are in exactly the same type of intrinsic states may have different causal powers due to differences in their circumstances.

It should be noted, however, that on some leading ‘externalist’ theories of content, content, unlike weight, depends on a historical context (on other externalist views, the correct attribution of a set of content-involving states is whatever makes the subject as rationally intelligible as possible in the circumstances). Call theories of the former kind ‘historical-externalist’ theories. On one leading historical-externalist theory, the content of a state depends on the learning history of the individual; on another, it depends on the selection history of the species of which the individual is a member. Historical-externalist theories prompt a concern that states cannot have causal effects in virtue of falling under intentional state types. Causal types, it might be claimed, are never such that their tokens must have a certain causal ancestry. But, if so, then, if the right account of content is a historical-externalist account, intentional types are not causal types. Some historical-externalists appear to concede this line of argument, and thus to deny that states can have effects in virtue of falling under intentional state types. Other historical-externalists, however, attempt to explain how intentional types can be causal, even though their tokens must have appropriate causal ancestries. This issue is hotly debated, and remains unresolved.

Finally, it is worth noting why it is controversial whether phenomenal state types can be realized by physical state types. Phenomenal state types are such that it is like something for a subject to be in them: it is, for instance, like something to have a throbbing pain. It has been argued that phenomenal state types are, for that reason, subjective: to fully understand what it is to be in them, one must be able to take up a certain experiential point of view. For, it is claimed, an essential aspect of what it is to be in a phenomenal state is what it is like to be in the state, and only by taking up a certain experiential point of view can one understand that aspect. Physical state types (in our strict and narrow sense), by contrast, are paradigmatically objective, i.e., non-subjective. The issue arises, then, as to whether phenomenal state types can be realized by physical state types: how could an objective state realize a subjective one? This issue too is hotly debated, and remains unresolved. Suffice it to say that if only physical types and types realizable by physical types are causal, and if phenomenal types are neither, then nothing can have any causal effects in virtue of falling under a phenomenal type. Thus, it could never be the case, for example, that a state causes a bad mood in virtue of being a throbbing pain.

If the universe is as classical (Newtonian) physics describes it, it functions automatically. We need not wonder about our ‘place’, because we do not have any. The feeling that we have control over anything, even our bodies, is an illusion. Our bodies are made of atoms, and every movement of every atom is fully determined by the configurations of the atoms in the universe in the distant past.

That is to say, in order to find our place in the universe we need to listen to our own deepest yearning and examine the moments of our most complete fulfillment. And although the nature of our deepest yearning may not be known to us, there is plenty of evidence that moments of breakthrough into the noumenal dimension, such as the moment of Heisenberg's discovery, bring with them a sense of total fulfillment. It is natural to suggest, therefore, that our function in the universe is to bring about a relationship between the phenomenal and the noumenal worlds.

The feeling of strangeness this suggestion provokes is due in no small part to the fact that our thinking has been formed in the materialistic paradigm, which does not recognize the idea of levels of being. Within this paradigm the idea we have just proposed concerning our place in the universe makes no sense. If we open our minds, however, to a world-view that is based on the aliveness and multilevelled nature of the universe, we may assuage our feelings of strangeness and find a solution to the puzzle.

Note that the tasks of participating in the governance of the universe, and of governing one's own functioning, call for contemplation rather than reasoning and action. The task of bringing order to the universe cannot be accomplished through the relatively inferior faculty of discursive reasoning, which figures things out in a sequential order. It calls, rather, for being in touch with aspects of the noumenal world, such as the Form of Order itself. It is telling in this context that St. Augustine, who was influenced by Plotinus, speaks in his "Confessions" of the experience of the noumenal versus the phenomenal as the experience of eternity versus the experience of time. He makes the point, evidently on the basis of his own experience, that our ordinary modes of thinking cannot approach the understanding of eternity. When people try, he says, their thoughts still turn and twist upon the ebb and flow of things in past and future time. But if only their minds could be seized and held steady, they would be still for a while and, for that short moment, they would glimpse the splendor of eternity which is forever still. They would contrast it with time, which is never still, and see that it is not comparable . . . If only men's minds could be seized and held still, they would see how eternity, in which there is neither past nor future, determines both past and future.

Ideas are that with which we think, or, in Locke’s terms, whatever the mind may be employed about in thinking. Looked at that way, they seem to be inherently transient, fleeting, and unstable private presences. Yet ideas also provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato’s theory of ‘forms’ is a celebration of the objective and timeless existence of ideas as concepts, reified to the point where they make up the only real world: a world of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notably in the ‘Timaeus’, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.

Together with this assimilation goes a general bias toward the sensory, so that what lies in the mind may be thought of as something like images, and a belief that thinking is well explained as the manipulation of such images, which have no real existence outside the imagination. It is not reason but 'the imagination' that is found to be responsible for our making the empirical inferences that we do. There are certain general 'principles of the imagination' according to which ideas naturally come and go in the mind under certain conditions. It is the task of the 'science of human nature' to discover such principles, but without itself going beyond experience. For example, an observed correlation between things of two kinds can be seen to produce in everyone a propensity to expect a thing of the second sort given an experience of a thing of the first sort. We get a feeling, or an 'impression', when the mind makes such a transition, and that is what leads us to attribute necessity to the relation between things of the two kinds. There is no necessity in the relations between things that happen in the world, but, given our experience and the way our minds naturally work, we cannot help thinking that there is.

A similar appeal to certain 'principles of the imagination' is what explains our belief in a world of enduring objects. Experience alone cannot produce that belief: everything we directly perceive is 'momentary and fleeting'. And whatever our experience is like, no reasoning could assure us of the existence of something independent of our impressions which continues to exist when they cease. The series of our constantly changing sense impressions presents us with observable features which Hume calls 'constancy' and 'coherence', and these naturally operate on the mind in such a way as eventually to produce 'the opinion of a continued and distinct existence'. The explanation is complicated, but it is meant to appeal only to psychological mechanisms which can be discovered by 'careful and exact experiments, and the observation of those particular effects, which result from [the mind's] different circumstances and situations'.



We believe not only in bodies, but also in persons, or selves, which continue to exist through time, and this belief too can be explained only by the operation of certain 'principles of the imagination'. We never directly perceive anything we can call ourselves: the most we can be aware of in ourselves are our constantly changing momentary perceptions, not the mind or self which has them. For Hume, there is nothing that really binds the different perceptions together; we are led into the 'fiction' that they form a unity only because of the way in which the thought of such series of perceptions works upon the mind. 'The mind is a kind of theatre, where several perceptions successively make their appearance . . . There is properly no simplicity in it at one time, nor identity in different; whatever natural propensity we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us. They are the successive perceptions only, that constitute the mind.'

Leibniz held, in opposition to Descartes, that adult humans can have experiences of which they are unaware: experiences which affect what they do, but which are not brought to self-consciousness. Yet there are creatures, such as animals and babies, which completely lack the ability to reflect on their experiences, and to become aware of them as experiences of theirs. The unity of a subject's experience, which stems from his capacity to recognize all his experience as his, was dubbed by Kant 'the transcendental unity of apperception' ~ 'apperception' being Leibniz's term for inner awareness or self-consciousness, in contrast with 'perception' or outer awareness ~. This apprehension of unity is transcendental, rather than empirical, because it is presupposed in experience and cannot be derived from it. Kant used the need for this unity as the basis of his attempted refutation of scepticism about the external world. He argued that my experiences could only be united in one self-consciousness if at least some of them were experiences of a law-governed world of objects in space. Outer experience is thus a necessary condition of inner awareness.

Here we seem to have a clear case of 'introspection'. Derived from the Latin 'intro' (within) + 'specere' (to look), introspection is the attention the mind gives to itself or to its own operations and occurrences. I can know there is a fat hairy spider in my bath by looking there and seeing it. But how do I know that I am seeing it rather than smelling it, or that my attitude to it is one of disgust rather than delight? One answer is: by a subsequent introspective act of 'looking within' and attending to the psychological state ~ my seeing the spider. Introspection, therefore, is a mental occurrence which has as its object some other psychological state, such as perceiving, desiring, willing, feeling, etc. In being a distinct awareness-episode it is different from the more general 'self-consciousness' which characterizes all or some of our mental history.

The awareness generated by an introspective act can have varying degrees of complexity. It might be a simple knowledge of (mental) things ~ such as a particular perception-episode ~ or it might be the more complex knowledge of truths about one's own mind. In this latter, full-blown judgement form, introspection is usually the self-ascription of psychological properties and, when linguistically expressed, results in statements like 'I am watching the spider' or 'I am repulsed'.

In psychology this deliberate inward look becomes a scientific method when it is 'directed toward answering questions of theoretical importance for the advancement of our systematic knowledge of the laws and conditions of mental processes'. In philosophy, introspection (sometimes also called 'reflection') remains simply 'that notice which the mind takes of its own operations', and it has been used to serve the following important functions:

(1) Methodological: Thought experiments are a powerful tool in philosophical investigation. The Ontological Argument, for example, asks us to try to think of the most perfect being as lacking existence, and Berkeley's Master Argument challenges us to conceive of an unseen tree; conceptual results are then drawn from our failure or success. For such experiments to work, we must not only have (or fail to have) the relevant conceptions but also know that we have (or fail to have) them ~ presumably by introspection.

(2) Metaphysical: A metaphysics of mind needs to take cognizance of introspection. One can argue for 'ghostly' mental entities, for 'qualia', or for 'sense-data' by claiming introspective awareness of them. First-person psychological reports can have special consequences for the nature of persons and personal identity: Hume, for example, was content to reject the notion of a soul-substance because he failed to find such a thing by 'looking within'. Moreover, some philosophers argue for the existence of additional perspectival facts ~ the fact of 'what it is like' to be the person I am or to have an experience of such-and-such a kind. Introspection as our access to such facts becomes important when we construct a complete metaphysics of the world.

(3) Epistemological: Perhaps surprisingly, the most important use made of introspection has been in accounting for our knowledge of the outside world. According to a foundationalist theory of justification, an empirical belief is either basic and 'self-justifying' or justified in relation to basic beliefs. Basic beliefs, therefore, constitute the rock-bottom of all justification and knowledge. Now introspective awareness is said to have a unique epistemological status: in it we are said to achieve the best possible epistemological position, and introspective beliefs are consequently held to be basic and thereby to constitute the foundation of all justification.

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than a belief about something else altogether? The same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays in a network of relations to other beliefs, its role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than from any other belief, just as I infer that belief from different things than I infer other beliefs from.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of beliefs. That is where coherence comes in. A belief has the content that it does because of the way in which it coheres within a system of beliefs. Weak coherence theories affirm that coherence is one determinant of the content of belief; strong coherence theories affirm that coherence is the sole determinant of the content of belief.

Nonetheless, the concept of the given refers to the immediate apprehension of the contents of sense experience, as expressed in first-person, present-tense reports of appearances. Apprehension of the given is seen as immediate both in a causal sense, since it lacks the usual causal chain involved in perceiving real qualities of physical objects, and in an epistemic sense, since judgements expressing it are justified independently of all other beliefs and evidence. Some proponents of the idea of the 'given' maintain that its apprehension is absolutely certain: infallible, incorrigible and indubitable. It has been claimed also that a subject is omniscient with regard to the given ~ if a property appears, then the subject knows this.

Without some independent indication that some of the beliefs within a coherent system are true, coherence in itself is no indication of truth: fairy stories can cohere. Our criteria for justification, however, must indicate to us the probable truth of our beliefs. Hence, within any system of beliefs there must be some privileged class with which others must cohere to be justified. In the case of empirical knowledge, such privileged beliefs must represent the point of contact between subject and world: they must originate in perception. When challenged, however, we justify our ordinary perceptual beliefs about physical properties by appeal to beliefs about appearances. The latter seem more suitable as foundational, since there is no class of more certain perceptual beliefs to which we appeal for their justification.

The argument that foundations must be certain was offered by Lewis (1946). He held that no proposition can be probable unless some are certain. If the probability of all propositions or beliefs were relative to evidence expressed in others, and if these relations were linear, then any regress would apparently have to terminate in propositions or beliefs that are certain. But Lewis shows neither that such relations must be linear nor that regresses cannot terminate in beliefs that are merely probable or justified in themselves without being certain or infallible.

Arguments against the idea of the given originate with Kant (1724-1804), who argues that percepts without concepts do not yet constitute any form of knowing. Being non-epistemic, they presumably cannot serve as epistemic foundations. Once we recognize that we must apply concepts of properties to appearances, and formulate beliefs utilizing those concepts, before the appearances can play any epistemic role, it becomes more plausible that such beliefs are fallible. The argument was developed by Wilfrid Sellars (1963), according to whom the idea of the given involves a confusion between sensing particulars (having sense impressions), which is non-epistemic, and having non-inferential knowledge of propositions referring to appearances. The former may be necessary for acquiring perceptual knowledge, but it is not itself a primitive kind of knowing. Its being non-epistemic renders it immune from error, but also unsuitable for epistemological foundations. The latter, non-inferential perceptual knowledge, is fallible, requiring concepts acquired through trained responses to public physical objects.

Contemporary foundationalists deny the coherentist's claim while eschewing the claim that foundations, in the form of reports about appearances, are infallible. They seek alternatives to the given as foundations. Although arguments against infallibility are sound, other objections to the idea of foundations are not. That concepts of objective properties are learned prior to concepts of appearances, for example, implies neither that claims about appearances are less certain than claims about objective properties, nor that the latter are prior in chains of justification. That there can be no knowledge prior to the acquisition and consistent application of concepts allows for propositions whose truth requires only consistent application of concepts, and this may be so for some claims about appearances. Coherentists would add, however, that such beliefs stand in need of justification themselves and so cannot be foundations.

Coherentists will claim that a subject requires evidence that he applies concepts consistently ~ that he is able, for example, consistently to distinguish red from other colours that appear. Beliefs about red appearances could not then be justified independently of other beliefs expressing that evidence. To save the part of the doctrine of the given that holds beliefs about appearances to be self-justified, we require an account of how such justification is possible: how some beliefs about appearances can be justified without appeal to evidence. Some foundationalists simply assert that such warrant derives from experience, though without the appeals to certainty made by proponents of the given.

It is, nonetheless, an explanation of this capacity that enables its development as an epistemological corollary to a metaphysical dualism. The world of 'matter' is known through external or outer sense-perception, so cognitive access to 'mind' must be based on a parallel process of introspection which, 'though it be not sense, as having nothing to do with external objects, yet . . . is very like it, and might properly enough be called internal sense'. However, having mind as object is not sufficient to make a way of knowing 'inner' in the relevant sense, because mental facts can be grasped through sources other than introspection. The point, rather, is that an 'inner perception' provides a kind of access to the mental not obtained otherwise ~ it is a 'look within from within'. Stripped of metaphor, this indicates the following epistemological features:

1. Only I can introspect my mind.

2. I can introspect only my mind.

3. Introspective awareness is superior to any other knowledge of contingent facts that I or others might have.

(1) and (2) are grounded in the Cartesian idea of the 'privacy' of the mental. Normally, a single object can be perceptually or inferentially grasped by many subjects, just as the same subject can perceive and infer different things. The epistemic peculiarity of introspection is that it is exclusive ~ it gives knowledge only of the mental history of the subject introspecting.

Tenet (3) of the traditional theory is grounded in the Cartesian idea of 'privileged access'. The epistemic superiority of introspection lies in its being an infallible source of knowledge: first-person psychological statements, which are its typical results, cannot be mistaken. This claim is sometimes supported by an 'imaginability test', e.g., the impossibility of imagining that I believe that I am in pain while at the same time imagining evidence that I am not in pain. An apparent counter-example to this infallibility claim would be the introspective judgement 'I am perceiving a dead friend' when I am really hallucinating. This is dealt with by reformulating such introspective reports as 'I seem to be perceiving a dead friend'. The importance of such privileged access is that introspection becomes a way of knowing immune from the pitfalls of other sources of cognition. The basic asymmetry between first- and third-person psychological statements is thus explained by the difference between introspective and non-introspective methods, but even dualists can account for introspective awareness in different ways:

(1) Non-perceptual models ~ Self-scrutiny need not be perceptual. My awareness of an object 'O' changes the status of 'O': it now acquires the property of 'being an object of awareness'. On the basis of this acquired property I can infer the fact that I am aware of 'O'. Such an 'inferential model' of awareness is suggested by the Bhatta Mimamsa school of Indian epistemology. This view does not construe introspection as a direct awareness of mental operations; interestingly, though, we will have occasion to refer to theories where the emphasis on directness itself leads to a non-perceptual, or at least a non-observational, account of introspection.

(2) Reflexive models ~ Epistemic access to our minds need not involve a separate attentive act. Part of what it means to be in a conscious state is that I know that I am in that state when I am in it. Consciousness is here conceived as a 'phosphorescence' attached to some mental occurrence, in no need of a subsequent illumination to reveal itself. Of course, if introspection is defined as a distinct act, then reflexive models are really accounts of first-person access that make no appeal to introspection.

(3) Public-mind theories and fallibility/infallibility models ~ The physicalist's denial of metaphysically private mental facts naturally suggests that 'looking within' is not merely like perception but is perception. For Ryle (1900-76), mental states are 'iffy' behavioural facts which, in principle, are equally accessible to everyone in the same way: one's own self-awareness, therefore, is, in effect, no different in kind from anyone else's observations about one's mind.

A more interesting move is for the physicalist to retain the truism that I grasp that I am sad in a very different way from that in which I know you to be sad. This directness, or non-inferential nature, of self-knowledge can be preserved in some physicalist theories of introspection. For instance, Armstrong's identification of mental states with causes of bodily behaviour, and of the latter with brain states, makes introspection the process of acquiring information about such inner physical causes. But since introspection is itself a mental state, it is a process in the brain as well: and since its grasp of the relevant causal information is direct, it becomes a process in which the brain scans itself.

Alternatively, a broadly 'functionalist' view of mental states suggests a machine analogue of the introspective situation: a machine table with the instruction 'Print: "I am in state A" when in state A' results in the output 'I am in state A' whenever state 'A' occurs. Similarly, if we define mental states and events functionally, we can say that introspection occurs when an occurrence of a mental state 'M' directly results in awareness of 'M'. Observe that this way of emphasizing directness yields a non-perceptual and non-observational model of introspection. The machine, in printing 'I am in state A', does so (when it is not making a 'verbal mistake') just because it is in state 'A'. There is no computation of information or process of ascertaining involved; at best, there is simply the passing through a sequence of states.
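A minimal programmatic sketch may make the analogy vivid. The class and state names below are illustrative inventions, not anything drawn from the functionalist literature; the point is only that occupying a state directly issues in a report of that state, with no intervening act of scanning or inference.

    # Python sketch: a "machine table" whose instruction for state "A" is
    # simply to emit "I am in state A". The report is produced because the
    # machine occupies the state, not because it observes or infers anything.
    class ReportingMachine:
        def __init__(self, state):
            self.state = state            # e.g. "A" or "B"

        def report(self):
            # No computation of information, no process of ascertaining:
            # being in the state directly yields the self-ascription.
            return "I am in state " + self.state

    m = ReportingMachine("A")
    print(m.report())                      # prints: I am in state A

On this picture, introspection is modelled as a direct, non-observational transition from being in a mental state to the awareness (or report) of that state.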



Traditionally, the legitimate question 'How do I know that I am seeing a spider?' was interpreted as a demand for the faculty or information-processing mechanism whereby I come to acquire this knowledge. Peculiarities of first-person psychological awareness and reports were carried over as peculiarities of this mechanism. However, the question need not demand a search for a method of knowing, but rather an explanation of the special epistemic features of first-person psychological statements. In that case the problem of introspection (as a way of knowing) dissolves, but the problem of explaining 'introspective' or first-person authority remains.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent, 'S', exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mrs. Collins, or in a free-market economy, or in God. It is sometimes supposed that all beliefs are 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought of as a matter of my believing that what you say is true, and your belief in free markets or in God a matter of your believing that free-market economies are desirable or that God exists.

It is doubtful, however, that non-propositional believing can, in every case, be reduced in this way. Debate on this point has tended to focus on an apparent distinction between 'belief-that' and 'belief-in', and the application of this distinction to belief in God. St. Thomas Aquinas (1225-74) supposed that to believe in God is simply to believe that certain truths hold: that God exists, that he is benevolent, etc. Others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

H.H. Price (1969) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But, according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. One might attempt to analyse this further attitude in terms of additional beliefs-that: (1) 'S' believes that 'χ' exists (and perhaps holds further factual beliefs about 'χ'); (2) 'S' believes that 'χ' is good or valuable in some respect; and (3) 'S' believes that χ's being good or valuable in this respect is itself a good thing. An analysis of this sort, however, fails adequately to capture the further affective component of belief-in. Thus, according to Price, if you believe in God, your belief is not merely that certain truths hold: you possess, in addition, an attitude of commitment and trust towards God.

Notoriously, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require further layers of justification not required for cases of belief-that.

Some philosophers have argued that, at least for cases in which belief-in is synonymous with faith (or faith-in), evidential thresholds for constituent propositional beliefs are diminished. You may reasonably have faith in God or Mrs. Collins, even though beliefs about their respective attributes, were you to harbour them, would be evidentially substandard.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this is united with his belief that God exists, the belief may survive epistemic buffeting ~ and reasonably so ~ in a way that an ordinary propositional belief would not.

What is at stake here is the appropriateness of distinct types of explanation. Ever since the time of Aristotle (384-322 BC), philosophers have emphasized the importance of explanatory knowledge. In simplest terms, we want to know not only what is the case but also why it is. This consideration suggests that we define explanation as an answer to a why-question. Such a definition would, however, be too broad, because some why-questions are requests for consolation (Why did my son have to die?) or moral justification (Why should women not be paid the same as men for the same work?). It would also be too narrow, because some explanations are responses to how-questions (How does radar work?) or how-possibly-questions (How is it possible for cats always to land on four feet?).

In its most general sense, 'to explain' means to make clear, to make plain, or to provide understanding. Definitions of this sort are philosophically unhelpful, for the terms used in the definitions are no less problematic than the term to be defined. Moreover, since a wide variety of things require explanation, and since many different types of explanation exist, a more complex account is required. The term 'explanandum' is used to refer to that which is to be explained; the term 'explanans' refers to that which does the explaining. The explanans and the explanandum taken together constitute the explanation.

One common type of explanation occurs when deliberate human actions are explained in terms of conscious purposes. 'Why did you go to the pharmacy yesterday?' 'Because I had a headache and needed to get some aspirin.' It is tacitly assumed that aspirin is an appropriate medication for headaches and that going to the pharmacy would be an efficient way of getting some. Such explanations are, of course, teleological, referring, as they do, to goals. The explanans is not the realization of a future goal ~ if the pharmacy happened to be closed for stocktaking, the aspirin would not have been obtained there, but that would not invalidate the explanation. Some philosophers would say that the antecedent desire to achieve the end is what does the explaining; others might say that the explaining is done by the nature of the goal and the fact that the action promoted the chances of realizing it. In any case, it should not be automatically assumed that such explanations are causal. Philosophers differ considerably on whether these explanations are to be framed in terms of causes or reasons.

The distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day, or simply: to get it there in a day. Strictly, the reason is expressed by 'to get it there in a day'. But this expresses my reason only because I am suitably motivated, in that I am in a reason state: wanting to get the letter there in a day. It is reason states ~ especially wants, beliefs and intentions ~ and not reasons strictly so called that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.

It has also seemed to those who deny that reasons are causes that the former justify, as well as explain, the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The 'logical connection argument' proceeds from this claim to the conclusion that reasons are not causes.

All the same, whether such explanations are framed in terms of reasons or of causes remains contested, and there are many differing analyses of such concepts as intention and agency. Expanding the domain beyond consciousness, Freud maintained, in addition, that much human behaviour can be explained in terms of unconscious wishes. These Freudian explanations should probably be construed as basically causal.

Problems arise when teleological explanations are offered in other contexts. The behaviour of non-human animals is often explained in terms of purpose, e.g., the mouse ran to escape from the cat. In such cases the existence of conscious purpose seems dubious. The situation is still more problematic when a super-empirical purpose is invoked, e.g., the explanation of living species in terms of God's purpose, or the vitalistic explanation of biological phenomena in terms of an entelechy or vital principle. In recent years an 'anthropic principle' has received attention in cosmology. All such explanations have been condemned by many philosophers as anthropomorphic.

Notwithstanding the preceding objection, philosophers and scientists often maintain that functional explanations play an important and legitimate role in various sciences such as evolutionary biology, anthropology and sociology. For example, in the case of the peppered moth in Liverpool, the change in colour to the dark phase and back again to the light phase provided adaptation to a changing environment and fulfilled the function of reducing predation on the species. In the study of primitive societies anthropologists have maintained that various rituals, e.g., a rain dance, which may be inefficacious in bringing about their manifest goals, e.g., producing rain, actually fulfil the latent function of increasing social cohesion at a period of stress. Those who offer teleological and/or functional explanations in common sense and science often take pains to argue that such explanations can be analysed entirely in terms of efficient causes, thereby escaping the charge of anthropomorphism, yet not all philosophers agree.

Mainly to avoid the incursion of unwanted theology, metaphysics, or anthropomorphism into science, many philosophers and scientists ~ especially during the first half of the twentieth century ~ held that science provides only descriptions and predictions of natural phenomena, not explanations. Beginning in the 1930s, however, a series of influential philosophers of science ~ including Karl Popper (1935), Carl Hempel and Paul Oppenheim (1948), and Hempel (1965) ~ maintained that empirical science can explain natural phenomena without appealing to metaphysics or theology. It appears that this view is now accepted by the vast majority of philosophers of science, though there is sharp disagreement on the nature of scientific explanation.

This approach, developed by Hempel, Popper and others, became virtually a 'received view' in the 1960s and 1970s. According to this view, to give a scientific explanation of a natural phenomenon is to show how the phenomenon can be subsumed under a law of nature. A particular rupture in a water pipe can be explained by citing the universal law that water expands when it freezes together with the fact that the temperature of the water in the pipe dropped below the freezing point. General laws, as well as particular facts, can be explained by subsumption: the law of conservation of linear momentum can be explained by derivation from Newton's second and third laws of motion. Each of these explanations is a deductive argument: the premisses constitute the explanans and the conclusion is the explanandum. The explanans contain one or more statements of universal laws and, in many cases, statements describing initial conditions. This pattern of explanation is known as the 'deductive-nomological model'; any such argument shows that the explanandum had to occur, given the explanans.
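The structure of such explanations can be displayed in a rough schematic of the deductive-nomological pattern (the horizontal line marks the deductive step):

    L1, L2, . . . , Lm     (statements of universal laws)
    C1, C2, . . . , Cn     (statements of initial conditions)
    --------------------------------------------------------
    E                      (the explanandum)

In the water-pipe case, the laws include 'water expands when it freezes', the initial conditions include the drop of the temperature below freezing point, and 'E' is the statement that the pipe ruptured.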

Moreover, in contrast to the foregoing views ~ which stress such factors as logical relations, laws of nature and causality ~ a number of philosophers have argued that explanation, and not just scientific explanation, can be analysed entirely in pragmatic terms.

During the past half-century much philosophical attention has been focussed on explanation in science and in history. Considerable controversy has surrounded the question of whether historical explanation must be scientific, or whether history requires explanations of different types. Many diverse views have been articulated: the foregoing brief survey does not exhaust the variety.

In everyday life we encounter many types of explanation which appear not to raise philosophical difficulties, in addition to those already mentioned. Prior to take-off a flight attendant explains how to use the safety equipment on the aeroplane. In a museum the guide explains the significance of a famous painting. A mathematics teacher explains a geometrical proof to a bewildered student. A newspaper story explains how a prisoner escaped. Additional examples come easily to mind. The main point is to remember the great variety of contexts in which explanations are sought and given.

Another item of importance to epistemology is the widely held notion that non-demonstrative inference can be characterized as the inference to the best explanation. Given the variety of views on the nature of explanation, this popular slogan can hardly provide a useful philosophical analysis.

The inference to the best explanation is claimed by many to be a legitimate form of non-deductive reasoning which provides an important alternative to both deduction and enumerative induction. Some would claim it is only through reasoning to the best explanation that one can justify beliefs about the external world, the past, theoretical entities in science, and even the future. Consider beliefs about the external world, and assume that we know what we do about our subjective and fleeting sensations. It seems obvious that we cannot deduce any truths about the existence of physical objects from truths describing the character of our sensations. But neither can we observe a correlation between sensations and something other than sensations, since by hypothesis all we have to rely on ultimately is knowledge of our sensations. Nonetheless, we may be able to posit physical objects as the best explanation for the character and order of our sensations. In the same way, various hypotheses about the past might best explain present memory, theoretical postulates in physics might best explain phenomena in the macro-world, and it is possible that our only access to the future is through such reasoning from past observations. But what exactly is the form of an inference to the best explanation?

When one presents such an inference in ordinary discourse it often seems to have the following form:

1. ‘O’ is the case

2. If 'E' had been the case, 'O' is what we would expect.

Therefore there is a high probability that:

3. ‘E’ was the case.

This is the argument form that Peirce (1839-1914) called 'hypothesis' or 'abduction'. To consider a very simple example, we might, upon coming across some footprints on the beach, reason to the conclusion that a person walked along the beach recently by noting that if a person had walked along the beach one would expect to find just such footprints.

But is abduction a legitimate form of reasoning? Obviously, if the conditional in (2) above is read as a material conditional, such arguments would be hopelessly weak. A material conditional is true whenever its consequent is true, so the proposition that 'E' materially implies 'O' is entailed by 'O' for any 'E' whatsoever; there would therefore always be an infinite number of competing inferences to the best explanation, and none of them would seem to lend support to its conclusion. The conditionals we employ in ordinary discourse, however, are seldom, if ever, material conditionals. The vast majority of 'if . . . then . . .' statements do not seem to be truth-functionally complex. Rather, they seem to assert a connection of some sort between the states of affairs referred to in the antecedent (after the 'if') and in the consequent (after the 'then'). Perhaps the argument form has more plausibility if the conditional is read in this more natural way. But consider an alternative footprints explanation:

1. There are footprints on the beach

2. If cows wearing boots had walked along the beach recently one would expect to find such footprints

Therefore, there is a high probability that:

3. Cows wearing boots walked along the beach recently.

This inference has precisely the same form as the earlier inference to the conclusion that people walked along the beach recently, and its premisses are just as true, but we would no doubt regard both the conclusion and the inference as simply silly. If we are to distinguish between legitimate and illegitimate reasoning to the best explanation, it would seem that we need a more sophisticated model of the argument form. In reasoning to an explanation we need criteria for choosing between alternative explanations. If reasoning to the best explanation is to constitute a genuine alternative to inductive reasoning, it is important that these criteria not be implicit premisses which would convert our argument into an inductive argument. Thus, for example, if the reason we conclude that people rather than cows walked along the beach is only that we are implicitly relying on the premiss that footprints of this sort are usually produced by people, then it is certainly tempting to suppose that our inference to the best explanation was really a disguised inductive inference of the form:

1. Most footprints are produced by people.

2. Here are footprints

Therefore in all probability,

3. These footprints were produced by people.

If we follow the suggestion made above, we might construe the form of reasoning to the best explanation as follows:

1. ‘O’ (a description of some phenomenon).

2. Of the set of available and competing explanations E1, E2 . . . , En capable of explaining ‘O’, E1 is the best according to the correct criteria for choosing among potential explanations.

Therefore in all probability,

3. E1.

There is, however, a crucial ambiguity in the concept of the best explanation. It might be true of an explanation E1 that it has the best chance of being correct without its being probable that E1 is correct. If I have two tickets in a lottery and one hundred other people each have one ticket, I am the person who has the best chance of winning, but it would be completely irrational to conclude on that basis that I am likely to win; it is much more likely that one of the other people will win than that I will. To conclude that a given explanation is actually likely to be correct, one must hold that it is more likely that it is true than that the disjunction of all other possible explanations is correct. And since on many models of explanation the number of potential explanations satisfying the formal requirements of adequate explanation is unlimited, this will be no mean feat.
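A worked figure makes the point plain (assuming, as in the example, 102 tickets in all):

    P(I win)              = 2/102   ≈ 0.02
    P(someone else wins)  = 100/102 ≈ 0.98

Although no single rival holds a better chance than mine, the disjunction of their chances dwarfs it; likewise, the single 'best' explanation may still be far less probable than the disjunction of its rivals.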

Explanations are also sometimes taken to be more plausible the more explanatory 'power' they have. This power is usually defined in terms of the number of things or, more likely, the number of kinds of things the theory can explain. Thus Newtonian mechanics was so attractive, the argument goes, partly because of the range of phenomena the theory could explain.

The familiarity of a kind of explanation is also sometimes cited as a reason for preferring it to less familiar kinds of explanation. So if one provides a kind of evolutionary explanation for the disappearance of one organ in a creature, one should look more favourably on a similar sort of explanation for the disappearance of another organ.

In evaluating the claim that inference to the best explanation constitutes a legitimate and independent argument form, one must explore the question of whether it is a contingent fact that, at least, most phenomena have explanations, and that explanations satisfying a given criterion ~ simplicity, for example ~ are more likely to be correct. While it might be nice if the universe were structured in such a way that simple, powerful, familiar explanations were usually the correct ones, it is difficult to avoid the conclusion that if this is true it is an empirical fact about our universe discoverable only a posteriori. If reasoning to the best explanation relies on such criteria, it seems that one cannot without circularity use reasoning to the best explanation to discover that reliance on such criteria is safe. But if one has some independent way of discovering that simple, powerful, familiar explanations are more often correct, then why should we think that reasoning to the best explanation is an independent source of information about the world? Why should we not conclude that it would be more perspicuous to represent the reasoning this way:

1. Most phenomena have the simplest, most powerful, familiar explanations available

2. Here is an observed phenomenon, and E1 is the simplest, most powerful, familiar explanation available.

Therefore, in all probability,

3. This is to be explained by E1.

But the above is simply an instance of familiar inductive reasoning.



There are various ways of classifying mental activities and states. One useful distinction is that between the propositional attitudes and everything else. A propositional attitude is one whose description takes a sentence as the complement of the verb. Belief is a propositional attitude: one believes (truly or falsely, as the case may be) that there are cookies in the jar. That there are cookies in the jar is the proposition expressed by the sentence following the verb. Knowing, judging, inferring, concluding and doubting are also propositional attitudes: one knows, judges, infers, concludes, or doubts that a certain proposition (the one expressed by the sentential complement) is true.

Though the propositions are not always explicit, hope, fear, expectation, intention, and a great many other terms are also (usually) taken to describe propositional attitudes: one hopes that (is afraid that, etc.) there are cookies in the jar. Wanting a cookie is, or can be construed as, a propositional attitude: wanting that one have (or eat, or whatever) a cookie; intending to eat a cookie is intending that one will eat a cookie.

Propositional attitudes involve the possession and use of concepts and are, in this sense, representational. One must have some knowledge or understanding of what χ's are in order to think, believe or hope that something is 'χ'. In order to want a cookie, or intend to eat one, one must, in some way, know or understand what a cookie is; one must have this concept. There is a sense in which one can want to eat a cookie without knowing what a cookie is ~ if, for example, one mistakenly thinks there are muffins in the jar and, as a result, wants to eat what is in the jar (= cookies). But this sense is hardly relevant, for in this sense one can want to eat the cookies in the jar without wanting to eat any cookies. For this reason (and in this sense) the propositional attitudes are cognitive: they require or presuppose a level of understanding and knowledge, the kind of understanding and knowledge required to possess the concepts involved in occupying the propositional state.

Though there is sometimes disagreement about their proper analysis, non-propositional mental states do not, at least on the surface, take propositions as their objects. Being in pain, being thirsty, smelling the flowers and feeling sad are introspectively prominent mental states that do not, like the propositional attitudes, require the application or use of concepts. One doesn't have to understand what pain or thirst is to experience pain or thirst. Assuming that pain and thirst are conscious phenomena, one must, of course, be conscious or aware of the pain or thirst to experience them, but awareness of must be carefully distinguished from awareness that. One can be aware of 'χ' ~ thirst or a toothache ~ without being aware that it is, e.g., thirst or a toothache. Awareness that, like belief that and knowledge that, is a propositional attitude; awareness of is not.

As the examples ~ pain, thirst, tickles, itches, hungers ~ are meant to suggest, the non-propositional states have a felt or experienced ('phenomenal') quality to them that is absent in the case of the propositional attitudes. Aside from who it is we believe to be playing the tuba, believing that John is playing the tuba is much the same as believing that Joan is playing the tuba. These are different propositional states, different beliefs, yet they are distinguished entirely in terms of their propositional content ~ in terms of what they are beliefs about. Contrast this with the difference between hearing John play the tuba and seeing him play the tuba. Hearing John play the tuba and seeing John play the tuba differ, not just (as do the beliefs) in what they are of or about (for these experiences are, in fact, of the same thing: John playing the tuba), but in their qualitative character: the one involves a visual, the other an auditory, experience. The difference between seeing John play the tuba and hearing John play the tuba, then, is a sensory, not a cognitive, difference.

Some mental states are a combination of sensory and cognitive elements. Fear and terror, sadness and anger, joy and depression, are ordinarily thought of in the way sensations are: not in terms of what propositions (if any) they represent, but (like visual and auditory experiences) in terms of their intrinsic character, in how they feel to the person experiencing them. But when we describe a person as being afraid that, sad that, or upset that (as opposed to merely thinking or knowing that) so-and-so happened, we typically mean to be describing the kind of sensory (feeling or emotional) quality accompanying the cognitive state. Being afraid that the dog is going to bite me is both to think that he might bite me ~ a cognitive state ~ and to feel fear or apprehension (sensory) at the prospect.

The perceptual verbs exhibit this kind of mixture, this duality between the sensory and the cognitive. Verbs like 'to hear', 'to see', and 'to feel' are often used to describe propositional (cognitive) states, but they describe these states in terms of the (sensory) way one comes to be in them. Seeing that there are two cookies left is coming to know this fact by seeing; feeling that there are two cookies left is coming to know it in a different way, by having tactile experiences (sensations).

On this model of the sensory-cognitive distinction (at least as it is realized in perceptual phenomena), sensations are a pre-conceptual, pre-cognitive vehicle of sensory information. The terms 'sensation' and 'sense-data' (or simply 'experience') were (and, in some circles, still are) used to describe this early phase of perceptual processing. It is currently more fashionable to speak of this sensory component in perception as the percept or the sensory information store, but the idea is generally the same: an acknowledgement of a stage in perceptual processing in which the incoming information is embodied in 'raw' sensory (pre-categorical, pre-recognitional) form. This early phase of the process is comparatively modular ~ relatively immune to, and insulated from, cognitive influence. The emergence of a propositional (cognitive) state ~ seeing that an object is red ~ depends, then, on the earlier occurrence of a conscious but nonetheless non-propositional condition: seeing (under the right conditions, of course) the red object. The sensory phase of this process constitutes the delivery of information (about the red object) in a particular form (visual); cognitive mechanisms are then responsible for extracting and using this information ~ for generating the belief (knowledge) that the object is red. (The phenomenon of blindsight suggests that this information can be delivered, perhaps in degraded form, at a non-conscious level.)

To speak of sensations of red objects, tubas and so forth, is to say that these sensations carry information about an object's colour, shape, orientation and position, and (in the case of audition) information about acoustic qualities such as pitch, timbre and volume. It is not to say that the sensations share the properties of the objects they are sensations of, or that they have the properties they carry information about. Auditory sensations are not loud and visual sensations are not coloured. Sensations are bearers of non-conceptualized information, and the bearer of the information that something is red need not itself be red. It need not even be the sort of thing that could be red: it might be a certain pattern of neuronal events in the brain. Nonetheless, the sensation, though not itself red, will (being the normal bearer of the information) typically produce in the subject who undergoes the experience a belief, or a tendency to believe, that something red is being experienced. Hence the existence of hallucinations.

Just as there are theories of the mind that would deny the existence of any state of mind whose essence was purely qualitative (i.e., did not consist of the state's extrinsic, causal properties), there are theories of perception and knowledge ~ cognitive theories ~ that deny a sensory component to ordinary sense perception. The sensory dimension (the look, feel, smell, taste of things) is (if it is not altogether denied) identified with some cognitive condition (knowledge or belief) of the experiencer. All seeing (not to mention hearing, smelling and feeling) becomes a form of believing or knowing. As a result, organisms that cannot know cannot have experiences. To avoid these strikingly counterintuitive results, such theories often appeal to implicit or otherwise unobtrusive (and, typically, undetectable) forms of believing or knowing.

Aside, though, from introspective evidence (closing and opening one's eyes, if it changes beliefs at all, doesn't just change beliefs; it eliminates and restores a distinctive kind of conscious experience), there is a variety of empirical evidence for the existence of a stage in perceptual processing that is conscious without being cognitive (in any recognizable sense). For example, experiments with brief visual displays reveal that when subjects are exposed for very brief (50 msec.) intervals to information-rich stimuli, there is persistence (at the conscious level) of what is called an image or visual icon that embodies more information about the stimulus than the subject can cognitively process or report on. Subjects can exploit the information in this persisting icon by reporting on any part of the now-absent array of numbers (they can, for instance, report the top three numbers, the middle three or the bottom three). They cannot, however, identify all nine numbers. They report seeing all nine, and they can identify any one of the nine, but they cannot identify all nine. Knowledge and belief, recognition and identification ~ these cognitive states, though present for any two or three numbers in the array, are absent for all nine numbers in the array. Yet the image carries information about all nine numbers (how else to account for the subjects' ability to identify any number in the absent array?). Obviously, then, the information is there, in the experience itself, whether or not it is, or even can be, cognitively extracted. As psychologists conclude, there is a limit on the information-processing capacities of the later (cognitive) mechanisms that is not shared by the sensory stages themselves.

Perceptual knowledge is knowledge acquired by or through the senses. This includes most of what we know; some would say it includes everything we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something ~ that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up. Seeing that the light has turned green is coming to know this fact by use of the eyes; feeling that the melon is overripe is coming to know a fact ~ that the melon is overripe ~ by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Seeing a rotten kumquat is not at all like the experience of smelling, tasting or feeling a rotten kumquat. Yet all these experiences can result in the same knowledge ~ knowledge that the kumquat is rotten. Although the experiences are very different, they must, if they are to yield knowledge, embody information about the kumquat: the information that it is rotten. Seeing that the fruit is rotten differs from smelling that it is rotten not in what is known, but in how it is known. In each case the information has the same source ~ the rotten kumquat ~ but it is, so to speak, delivered via different channels and coded in different experiential forms.

It is important to avoid confusing perceptual knowledge of facts, e.g., that the kumquat is rotten, with the perception of objects, e.g., rotten kumquats. It is one thing to see (taste, smell, feel) a rotten kumquat, and quite another to know (by seeing or tasting) that it is a rotten kumquat. Some people, after all, do not know what kumquats look like. They see a kumquat but do not realize (do not see that) it is a kumquat. Again, some people do not know what a kumquat smells like. They smell a rotten kumquat and ~ thinking, perhaps, that this is the way this strange fruit is supposed to smell ~ do not realize from the smell, i.e., do not smell that, it is a rotten kumquat. In such cases people see and smell rotten kumquats ~ and in this sense perceive rotten kumquats ~ without ever knowing that they are kumquats, let alone rotten kumquats. They cannot know, at least not by seeing and smelling, and not until they have learned something about (rotten) kumquats. Since our topic is perceptual knowledge ~ knowing, by sensory means, that something is 'F' ~ we will be primarily concerned with the question of what more, beyond the perception of F's, is needed to see that (and thereby know that) they are 'F'. The question is, however, not how we see kumquats (for even the ignorant can do this) but how we know (if indeed we do) that that is what we see.

Much of our perceptual knowledge is indirect, dependent or derived. By this it is meant that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspaper, that our team has lost again; or see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees ~ hence, comes to know ~ something about the gauge (that it reads 'empty'), the newspaper (what it says) or the person's expression, one would not see (hence, know) what one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot ~ not, at least, in this way ~ hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, etc.) that some other condition, b's being 'G', obtains. When this occurs, the knowledge (that 'a' is 'F') is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.

Though perceptual knowledge about objects is often, in this way, dependent on knowledge of facts about different objects, the derived knowledge is sometimes about the same object. That is, we see that 'a' is 'F' by seeing, not that some other object is 'G', but that 'a' itself is 'G'. We see, by her expression, that she is nervous. She tells that the fabric is silk (not polyester) by the characteristic 'greasy' feel of the fabric itself (not, as I do, by what is printed on the label). We tell whether it is an oak tree, a Porsche, a geranium, an igneous rock or a misprint by its shape, colour, texture, size, behaviour and distinctive markings. Perceptual knowledge of this sort is also derived ~ derived from the more basic facts (about 'a') we use to make the identification. In this case the perceptual knowledge is still indirect because, although the same object is involved, the facts we come to know about it are different from the facts that enable us to know it.

Derived knowledge is sometimes described as inferential, but this is misleading. At the conscious level there is no passage of the mind from premise to conclusion, no reasoning, no problem-solving. The observer, the one who sees that 'a' is 'F' by seeing that 'b' (or 'a' itself) is 'G', need not be (and typically is not) aware of any process of inference, any passage of the mind from one belief to another. The resulting knowledge, though logically derivative, is psychologically immediate. I could see that she was getting angry, so I moved my hand. I did not - at least not at any conscious level - infer (from her expression and behaviour) that she was getting angry. I could (or so it seemed to me) see that she was getting angry. It is this psychological immediacy that makes indirect perceptual knowledge a species of perceptual knowledge.

The psychological immediacy that characterises so much of our perceptual knowledge - even (sometimes) the most indirect and derived forms of it - does not mean that learning is not required to know in this way. One is not born with (and may, in fact, never develop) the ability to recognize daffodils, muskrats and angry companions. It is only after long experience that one is able visually to identify such things. Beginners may do something corresponding to inference: they recognize relevant features of trees, birds, and flowers, features they already know how to identify perceptually, and then infer (conclude), on the basis of what they see, and under the guidance of more expert observers, that it is an oak, a finch or a geranium. But the experts (and we are all experts on many aspects of our familiar surroundings) do not typically go through such a process. The expert just sees that it is an oak, a finch or a geranium. The perceptual knowledge of the expert is still dependent, of course, since even an expert cannot see what kind of flower it is if she cannot first see its colour and shape; but the point is that the expert has developed identificatory skills that no longer require the sort of conscious inferential processes that characterize a beginner's efforts.

Coming to know that 'a' is 'F' by seeing that 'b' is 'G' obviously requires some background assumption on the part of the observer, an assumption to the effect that 'a' is 'F' (or perhaps only probably 'F') when 'b' is 'G'. If one does not assume (take for granted) that the gauge is properly connected, and does not thereby assume that it would not register 'empty' unless the tank was nearly empty, then even if one could see that it registered 'empty', one would not learn (hence, would not see) that one needed gas. At least, one would not see it by consulting the gauge. Likewise, in trying to identify birds, it is no use being able to see their markings if one does not know something about which birds have which marks - something of the form: a bird with these markings is (probably) a finch.

It would seem, moreover, that these background assumptions, if they are to yield knowledge that 'a' is 'F', as they must if the observer is to see (by b's being 'G') that 'a' is 'F', must themselves qualify as knowledge. For if this background fact is not known, if it is not known whether 'a' is 'F' when 'b' is 'G', then the knowledge of b's being 'G' is, taken by itself, powerless to generate the knowledge that 'a' is 'F'. If the conclusion is to be known to be true, both of the premises used to reach that conclusion must be known to be true. Or so it would seem.

Externalists, however, argue that the indirect knowledge that 'a' is 'F', though it may depend on the knowledge that 'b' is 'G', does not require knowledge of the connecting fact, the fact that 'a' is 'F' when 'b' is 'G'. Simple belief (or, perhaps, justified belief; there are stronger and weaker versions of externalism) in the connecting fact is sufficient to confer knowledge of the connected fact. Even if, strictly speaking, I do not know she is nervous whenever she fidgets like that, I can nonetheless see, and hence know, that she is nervous by the way she fidgets, if I (correctly) assume that this behaviour is a reliable expression of nervousness. One need not know the gauge is working well to make observations (acquire observational knowledge) with it. All that is required, besides the observer believing that the gauge is reliable, is that the gauge in fact be reliable, i.e., that the observer's background beliefs be true. Critics of externalism have been quick to point out that this theory has the unpalatable consequence that knowledge can be made possible by - and, in this sense, be made to rest on - lucky hunches (that turn out to be true) and unsupported (even irrational) beliefs. Surely, internalists argue, if one is going to know that 'a' is 'F' on the basis of b's being 'G', one should have (as a bare minimum) some justification for thinking that 'a' is 'F', or is probably 'F', when 'b' is 'G'.

Whatever view one takes about these matters (with the possible exception of extreme externalism), indirect perception obviously requires some understanding (knowledge? justification? belief?) of the general relationship between the fact one comes to know (that 'a' is 'F') and the fact (that 'b' is 'G') that enables one to know it. And it is this requirement on background knowledge or understanding that leads to questions about the possibility of indirect perceptual knowledge. Is it really knowledge? This first question is inspired by sceptical doubts about whether we can ever know the connecting facts in question. How is it possible to learn, to acquire knowledge of, the connecting facts, knowledge of which is necessary to see, by b's being 'G', that 'a' is 'F'? These connecting facts do not appear to be perceptually knowable. Quite the contrary, they appear to be general truths knowable (if knowable at all) by inductive inference from past observations. And if one is sceptical about obtaining knowledge in this indirect, inductive way, one is, perforce, sceptical about the existence of the kind of indirect knowledge, including the indirect perceptual knowledge of the sort described, that depends on it.

Even if one puts aside such sceptical questions, however, there remains a legitimate concern about the perceptual character of this kind of knowledge. If one sees that 'a' is 'F' by seeing that 'b' is 'G', is one really seeing that 'a' is 'F'? Isn't perception merely a part - and, from an epistemological standpoint, the less significant part - of the process whereby one comes to know that 'a' is 'F'? One must, it is true, see that 'b' is 'G', but this is only one of the premises needed to reach the conclusion (knowledge) that 'a' is 'F'. There is also the background knowledge that is essential to the process. If we think of a theory as any factual proposition, or set of factual propositions, that cannot itself be known in some direct observational way, we can express this worry by saying that indirect perception is always theory-loaded: seeing (indirectly) that 'a' is 'F' is only possible if the observer already has knowledge of (justification for, belief in) some theory, the theory 'connecting' the fact one comes to know (that 'a' is 'F') with the fact (that 'b' is 'G') that enables one to know it.

This, of course, reverses the standard foundationalist picture of human knowledge. Instead of theoretical knowledge depending on, and being derived from, perception, perception (of the indirect sort) presupposes prior theoretical knowledge.

Foundationalists are quick to point out that this apparent reversal in the structure of human knowledge is only apparent. Our indirect perception of facts depends on theory, yes, but this merely shows that indirect perceptual knowledge is not part of the foundation. To reach the kind of perceptual knowledge that lies at the foundation, we need to look at a form of perception that is purified of all theoretical elements. This, then, will be perceptual knowledge pure and direct. No background knowledge or assumptions about connecting regularities are needed in direct perception because the known facts are presented directly and immediately, and not (as in indirect perception) on the basis of other facts. In direct perception all the justification (needed for knowledge) is right there in the experience itself.

What, then, about the possibility of perceptual knowledge pure and direct, the possibility of coming to know, on the basis of sensory experience, that 'a' is 'F', where this does not require assumptions or knowledge that has its source outside the experience itself? Where is this epistemological 'pure gold' to be found?

There are, basically, two views about the nature of direct perceptual knowledge (coherentists would deny that any of our knowledge is basic in this sense). These views (following traditional nomenclature) can be called 'direct realism' and 'representationalism' or 'representative realism'. A representationalist restricts direct perceptual knowledge to objects of a very special sort: ideas, impressions, or sensations, sometimes called sense-data - entities in the mind of the observer. One directly perceives a fact, e.g., that 'b' is 'G', only when 'b' is a mental entity of some sort - a subjective appearance or sense-datum - and 'G' is a property of this datum. Knowledge of these sensory states is supposed to be certain and infallible. These sensory facts are, so to speak, right up against the mind's eye. One cannot be mistaken about these facts, for these facts are, in reality, facts about the way things appear to be, and one cannot be mistaken about the way things appear to be. Normal perception of external conditions, then, turns out to be (always) a type of indirect perception. One 'sees' that there is a tomato in front of one by seeing that the appearances (of the tomato) have a certain quality (reddish and bulgy) and inferring - this is typically said to be automatic and unconscious - on the basis of certain background assumptions, e.g., that there typically is a tomato in front of one when one has experiences of this sort, that there is a tomato in front of one. All knowledge of objective reality, then, even what common sense regards as the most direct perceptual knowledge, is based on an even more direct knowledge of the appearances.

For the representationalist, then, perceptual knowledge of our physical surroundings is always theory-loaded and indirect. Such perception is 'loaded' with the theory that there is some regular, some uniform, correlation between the way things appear (known in the perceptually direct way) and the way things actually are (known, if known at all, in a perceptually indirect way).

The second view, direct realism, refuses to restrict direct perceptual knowledge to an inner world of subjective experience. Though the direct realist is willing to concede that much of our knowledge of the physical world is indirect, however direct and immediate it may sometimes feel, some perceptual knowledge of physical reality is direct. What makes it direct is that such knowledge is not based on, nor in any way dependent on, other knowledge and belief. The justification needed for the knowledge is right there in the experience itself.

To understand the way this is supposed to work, consider an ordinary example. 'S' identifies a banana (learns that it is a banana) by noting its shape and colour - perhaps even tasting and smelling it (to make sure it is not wax). In this case the perceptual knowledge that it is a banana is (the direct realist admits) indirect, dependent on S's perceptual knowledge of its shape, colour, smell, and taste. 'S' learns that it is a banana by seeing that it is yellow, banana-shaped, etc. Nonetheless, S's perception of the banana's colour and shape is not indirect. 'S' does not see that the object is yellow, for example, by seeing, knowing, or believing anything more basic - either about the banana or anything else, e.g., his own sensations of the banana. 'S' has learned to identify such features, of course, but what 'S' learned to do is not an inference, not even an unconscious inference, from other things he believes. What 'S' acquired was a cognitive skill, a disposition to believe of yellow objects he saw that they were yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S's identificatory successes will depend on his operating in certain special conditions, of course. 'S' will not, perhaps, be able to visually identify yellow objects in drastically reduced lighting, at funny viewing angles, or when afflicted with certain nervous disorders. But these facts about when 'S' can see that something is yellow do not show that his perceptual knowledge (that 'a' is yellow) in any way depends on a belief (let alone knowledge) that he is in such special conditions. It merely shows that direct perceptual knowledge is the result of exercising a skill, an identificatory skill, that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane. He needs normal conditions to do what he has learned to do. So also with individuals who have developed perceptual (cognitive) skills. They need normal conditions to do what they have learned to do. They need normal conditions to see, for example, that something is yellow. But they do not, any more than the basketball player, have to know they are in these conditions to do what being in these conditions enables them to do.

This means, of course, that for the direct realist direct perceptual knowledge is fallible and corrigible. Whether 'S' sees that 'a' is 'F' depends on his being caused to believe that 'a' is 'F' in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then 'S' sees (hence, knows) that 'a' is 'F'. If they aren't, he doesn't. Whether or not 'S' knows depends, then, not on what else, if anything, 'S' believes, but on the circumstances in which 'S' comes to believe. This being so, this type of direct realism is a form of externalism. Direct perception of objective facts, pure perceptual knowledge of external events, is made possible because what is needed, by way of justification, for such knowledge has been reduced. Background knowledge - and, in particular, the knowledge that the experience does suffice for knowing - is not needed.

This means that the foundations of knowledge are fallible. Nonetheless, though fallible, they are in no way derived. That is what makes them foundations. Even if they are brittle, as foundations sometimes are, everything else rests upon them.

The theory of representative realism holds that (1) there is a world whose existence and nature are independent of us and of our perceptual experience of it; (2) perceiving an object located in that external world necessarily involves causally interacting with that object; and (3) the information acquired in perceiving an object is indirect: it is information most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself.

Clause (1) makes representative realism a species of realism.

Clause (2) makes it a species of causal theory of perception.

Clause (3) makes it a species of representative, as opposed to direct, realism.

Traditionally, representative realism has been allied with an act/object analysis of sensory experience; the act/object analysis is traditionally a major plank in arguments for representative realism. According to the act/object analysis, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, Meinongian objects (which may not exist or have any form of being), and, more commonly, private mental entities with sensory qualities. (The term 'sense-data' is now usually applied to the latter, but has also been used as a general term for objects of sense experience, as in the work of G.E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. For the representative realist, objects of experience (of which we are directly aware) are distinct from objects of perception (of which we are only 'indirectly aware'). Meinongians, however, may simply treat objects of perception as existing objects of experience.

Realism in any area of thought is the doctrine that certain entities allegedly associated with that area are indeed real. Common-sense realism - sometimes called 'realism' without qualification - says that ordinary things like chairs and trees and people are real. Scientific realism says that theoretical posits like electrons and fields of force and quarks are equally real. And psychological realism says that mental states like pains and beliefs are real. Realism can be upheld - and opposed - in all such areas, as it can with differently or more finely drawn provinces of discourse: for example, with discourse about colours, about the past, about possibility and necessity, or about matters of moral right and wrong. The realist in any such area insists on the reality of the entities in question in the relevant discourse.

If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition, for they are legion. Some opponents deny that there are any distinctive posits associated with the area of discourse under dispute: a good example is the emotivist doctrine that moral discourse does not posit values but serves only, like applause and exclamation, to express feelings. Other opponents deny that the entities posited by the relevant discourse exist, or at least exist independently of our thinking about them: here the standard example is 'idealism'. And others again insist that the entities associated with the discourse in question are tailored to our human capacities and interests and, to that extent, are as much a product of invention as a matter of discovery.

Nevertheless, one use of terms such as 'looks', 'seems', and 'feels' is to express opinion. 'It looks as if the Labour Party will win the next election' expresses an opinion about the party's chances and does not describe a particular kind of perceptual experience. We can, however, use such terms to describe perceptual experience divorced from any opinion to which the experience may incline us. A straight stick half-immersed in water looks bent, and does so to people completely familiar with this illusion who have, therefore, no inclination to hold that the stick is in fact bent. Such uses of 'looks', 'seems', 'tastes', etc. are commonly called 'phenomenological'.

The act/object theory holds that the sensory experiences recorded by sentences employing these phenomenological uses are a matter of being directly acquainted with something which actually bears the relevant property: when something looks red to me, I am acquainted with a red expanse (in my visual field); when something tastes bitter to me, I am directly acquainted with a sensation with the property of being bitter, and so on. (If you do not understand the term 'directly acquainted', stick a pin into your finger. The relation you will then bear to your pain, as opposed to the relation of concern you might bear to another's pain when told about it, is an instance of direct acquaintance in the intended sense.)

The act/object account of sensory experience combines with various considerations traditionally grouped under the heading of the argument from illusion to provide arguments for representative realism, or more precisely for the clause in it that contends that our sensorily derived information about the world comes indirectly, that what we are most directly acquainted with is not an aspect of the world but an aspect of our mental sensory responses to it. Consider, for instance, the aforementioned refractive illusion, that of a straight stick in water looking bent. The act/object account holds that in this case we are directly acquainted with a bent shape. This shape, so the argument runs, cannot be the stick, as the stick is straight, and thus must be a mental item, commonly called a sense-datum. And, in general, sense-data - visual, tactual, etc. - are held to be the objects of direct acquaintance. Perhaps the most striking use of the act/object analysis to bolster representative realism turns on what modern science tells us about the fundamental nature of the physical world. Modern science tells us that the objects of the physical world around us are literally made up of enormously many, widely separated, tiny particles whose nature can be given in terms of a small number of properties like mass, charge, spin and so on. (These properties are commonly called the primary qualities. The distinction between primary and secondary qualities is a metaphysical distinction between qualities which really belong to objects in the world and qualities which only appear to belong to them, or which human beings only believe to belong to them, because of the effects those objects produce in human beings, typically through the sense organs; on this picture the world itself really contains only particles of certain kinds in a void.) To think that some objects in the world are coloured, or sweet, or bitter is, on this view, to attribute to objects qualities which they do not actually possess. But, of course, that is not how the objects look to us, not how they present themselves to our senses. They look continuous and coloured. What, then, can these coloured expanses with which we are directly acquainted be, other than mental sense-data?

Two objections dominate the literature on representative realism. One goes back to Berkeley (1685-1753) and is that representative realism leads straight to scepticism about the external world; the other is that the act/object account of sensory awareness should be rejected in favour of an adverbial account.

Traditional representative realism is a 'veil of perception' doctrine, in Bennett's (1971) phrase. Locke's (1632-1704) idea was that the physical world was revealed by science to be in essence colourless, odourless, tasteless and silent, and that we perceive it by, to put it metaphorically, throwing a veil over it by means of our senses. It is the veil we see, in the strictest sense of 'see'. This does not mean that we do not really see the objects around us. It means that we see an object in virtue of seeing the veil, the sense-data, causally related in the right way to that object. An obvious question to ask, therefore, is what justifies us in believing that there is anything behind the veil; and if we are somehow justified in believing that there is something behind the veil, how can we be confident of what it is like?

One intuition that lies at the heart of the realist's account of objectivity is that, in the last analysis, the objectivity of a belief is to be explained by appeal to the independent existence of the entities it concerns: epistemological objectivity, that is, is to be analysed in terms of ontological notions of objectivity. A judgement or belief is epistemologically objective if and only if it stands in some specified relation to an independently existing, determinate reality. Frege (1848-1925), for example, believed that arithmetic could comprise objective knowledge only if the numbers it refers to, the propositions it consists of, the functions it employs, and the truth-values it aims at are all mind-independent entities. And conversely, within a realist framework, to show that the members of a given class of judgements are merely subjective, it is sufficient to show that there exists no independent reality that those judgements characterize or refer to.

Thus it is frequently argued that if values are not part of the fabric of the world, then moral subjectivism is inescapable. For the realist, then, the epistemological notion of objectivity is to be elucidated by appeal to the existence of determinate facts, objects, properties, events and the like, which exist or obtain independently of any cognitive access we may have to them. And one of the strongest impulses towards Platonic realism - the theoretical commitment to the existence of abstract objects like sets, numbers, and propositions - stems from the widespread belief that only if such things exist in their own right can we allow that logic, arithmetic and science are indeed objective. Though 'Platonist' realism in a sense accounts for mathematical knowledge, it postulates such a gulf between both the ontology and the epistemology of science and that of mathematics that realism is often said to make the applicability of mathematics in natural science into an inexplicable mystery.

This picture is rejected by anti-realists. The possibility that our beliefs and theories are objectively true is not, according to them, capable of being rendered intelligible by invoking the nature and existence of reality as it is in and of itself. If our conception of epistemological objectivity is minimal, requiring only 'presumptive universality', then alternative, non-realist analyses of it can seem possible - and even attractive. Such analyses have construed the objectivity of an arbitrary judgement as a function of its coherence with other judgements, of its possession of grounds that warrant it, of its conformity to the a priori rules that constitute understanding, of its verifiability (or falsifiability), or of its permanent presence in the mind of God. One intuition common to a variety of different anti-realist theories is that for our assertions to be objective, for our beliefs to comprise genuine knowledge, those assertions and beliefs must be, among other things, rational, justifiable, coherent, communicable and intelligible. But it is hard, the anti-realist claims, to see how such properties as these can be explained by appeal to entities as they are in and of themselves. On the contrary, according to most forms of anti-realism, it is only on the basis of subjective notions like 'the way reality seems to us', 'the evidence that is available to us', 'the criteria we apply', 'the experience we undergo' or 'the concepts we have acquired' that the epistemological objectivity of our beliefs can possibly be explained.

Internalists hold that the reasons by which a belief is justified must be accessible in principle to the subject holding that belief; externalists deny this requirement, proposing that it makes knowing too difficult to achieve in most normal contexts. The internalist-externalist debate is sometimes also viewed as a debate between those who think that knowledge can be naturalized (externalists) and those who do not (internalists). Naturalists hold that the evaluative notions used in epistemology can be explained in terms of non-evaluative concepts - for example, that justification can be explained in terms of something like reliability. They deny a special normative realm of language that is theoretically different from the kinds of concepts used in factual scientific discourse. Non-naturalists deny this and hold to an essential difference between the normative and the factual: the former can never be derived from or constituted by the latter. So internalists tend to think of reason and rationality as non-explicable in natural, descriptive terms, whereas externalists think such an explanation is possible.

Although the reason . . . to what we think to be the truth, the sceptic uses an argumentative strategy to show that we do not genuinely have knowledge and should therefore suspend judgement. But, unlike the sceptic, many other philosophers maintain that more than one of the alternatives is acceptable and can constitute genuine knowledge. Some philosophers, moreover, have invoked hypothetical sceptics in their work to explore the nature of knowledge. These philosophers did not doubt that we have knowledge, but thought that by testing knowledge as severely as one can, one gets clearer about what counts as knowledge, and greater insight results. Hence there are underlying differences in what counts as knowledge for the sceptic and for other philosophers. Traditional epistemology has been occupied with debates of this kind, which have often led to dogmatism. Various types of beliefs were proposed as candidates for sceptic-proof knowledge, for example, those beliefs regarded by many as immune to doubt. What they all had in common was the idea that empirical knowledge begins with the data of the senses, that this is safe from scepticism, and that a further superstructure of knowledge is to be built on this firm basis.

It might well be observed that this reply to scepticism fares better as a justification for believing in the existence of external objects than as a justification of the views we have about their nature. It is incredible that nothing independent of us is responsible for the manifest patterns displayed by our sense-data, but granting this leaves open many possibilities about the nature of the hypothesized external reality. Direct realists often make much of the apparent advantage that their view has on the question of the nature of the external world. The fact of the matter is, though, that it is much harder to arrive at tenable views about the nature of external reality than it is to defend the view that there is an external reality of some kind or other. The history of human thought about the nature of the external world is littered with what are now seen (with the benefit of hindsight) to be egregious errors - the four-element theory, phlogiston, the crystal spheres, vitalism, and so on. It can hardly be an objection to a theory that it makes the question of the nature of external reality much harder than the question of its existence.

The way we talk about sensory experience certainly suggests an act/object view. When something looks thus and so in the phenomenological sense, we naturally describe the nature of our sensory experience by saying that we are acquainted with a thus-and-so 'given'. But suppose that this is a misleading grammatical appearance, engendered by the linguistic propriety of forming complete, putatively referring expressions like 'the bent shape in my visual field', and that there is no more a bent shape in existence for the representative realist to contend to be a mental sense-datum than there is a bad limp in existence when someone has, as we say, a bad limp. When someone has a bad limp, they limp badly; similarly, according to the adverbial theorist, when, as we naturally put it, I am aware of a bent shape, we would better express the way things are by saying that I sense bent-shape-ly. Where the act/object theorist analyses what gives the nature of the sensory experience as a feature of an object, the adverbial theorist analyses it as a mode of sensing. (The decision between the act/object and adverbial theories is a hard one.)

In its best-known form, the adverbial theory of experience proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example,

(1) Rod is experiencing a pink square

is rewritten as,

Rod is experiencing (pink square)-ly

This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (1) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbialization of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception), coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience which is being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier and, in particular, as a modifier of a verb. If this is so, it is perhaps appropriate to regard it as a special kind of adverb at the semantic level.

Nonetheless, according to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness in the event of experiencing that object. Even hallucinatory and dream experiences are supposed to have as their objects whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects which may not exist or have any form of being, and, more commonly, as private mental entities with sensory qualities. Finally, in terms of representative realism, the objects of perception (of which we are only 'indirectly aware') remain distinct from the objects of experience (of which we are 'directly aware').

As noted, representative realism has traditionally been allied with the act/object theory. But we can also approach the debate between representative realism and direct realism in terms of the transmission of information; Mackie (1976) argues that Locke (1632-1704) can be read as approaching the debate in this way. Consider watching a game on television. My senses, in particular my eyes and ears, 'tell' me that Carlton is winning. What makes this possible is the existence of a long and complex causal chain of electromagnetic radiation from the game through the television cameras and various cables to my eyes and the television screen. Each stage of this process carries information about preceding stages, in the sense that the way things are at a given stage depends on the way things are at preceding stages. Otherwise the information would not be transferred from the game to my brain: there needs to be a systematic covariance between the state of my brain and the state of the game, and this cannot obtain unless it obtains between intermediate members of the long causal chain. For instance, if the state of my retina did not systematically covary with the state of the television screen before me, my optic nerve would have, so to speak, nothing to go on to tell my brain about the screen, and so in turn nothing to go on to tell my brain about the game. There is no 'information at a distance'.

Of the stages in this transmission of information between game and brain, I am perceptually aware of only a few. Much of what happens between game and brain I am quite ignorant about; some of what happens I know about from books; but some of what happens I am perceptually aware of: the images on the screen. I am also perceptually aware of the game. Otherwise I could not be said to watch the game on television. Now my perceptual awareness of the game depends on my perceptual awareness of the screen. The former goes by way of the latter. In saying this I am not saying that I go through some sort of internal monologue like 'Such-and-such images on the screen are moving thus and thus; therefore, Carlton is attacking the goal'. Indeed, if you suddenly covered the screen with a cloth and asked me (1) to report on the images and (2) to report on the game, I might well find it easier to report on the game than on the images. But that does not mean that my awareness of the game does not go by way of my awareness of the images on the screen. It shows only that I am more interested in the game than in the screen, and so am storing beliefs about it in preference to beliefs about the screen.

We can now see how to elucidate representative realism independently of the debate between act/object and adverbial theorists about sensory experience. Our initial statement of representative realism talked of the information acquired in perceiving an object being most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself. On the act/object, sense-data approach, what is held to make that true is the fact that what we are immediately aware of is a mental sense-datum. But instead, representative realists can put their view this way: just as awareness of the game goes by way of awareness of the screen, so awareness of the screen goes by way of awareness of experience; and, in general, when subjects perceive objects, their perceptual awareness always goes by way of awareness of experience.

Why believe such a view? Because of the point we referred to earlier: the world as presented by our senses is so very different from any picture provided by modern science. It is so different, in fact, that it is hard to grasp what might be meant by insisting that we are in epistemologically direct contact with the world.

An argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument, which must be distinguished carefully. Some of these differences concern the premisses (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). It is therefore important to distinguish the different versions of direct realism, since not all of them might be taken to be vulnerable to familiar facts about the possibility of perceptual illusion.

A crude statement of direct realism holds that we sometimes directly perceive physical objects and their properties: we do not always perceive physical objects by perceiving something else, e.g., a sense-datum. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually perceiving something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-data theorists should want. At least, many of the philosophers who object to direct realism would prefer to express what they are objecting to in terms of a technical and philosophically controversial concept such as acquaintance. Using such a notion we could define direct realism this way: in veridical experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.

We know things by experiencing them, and knowledge of acquaintance (Russell changed the preposition to 'by') is epistemically prior to, and has a relatively higher degree of epistemic justification than, knowledge about things. Indeed, sensation has 'the one great value of trueness or freedom from mistake'.

A thought (using that term broadly, to mean any mental state) constituting knowledge of acquaintance with a thing is more or less causally proximate to sensations caused by that thing, while a thought constituting knowledge about the thing is more or less causally distant, being separated from the thing and experience of it by processes of attention and inference. At the limit, if a thought is maximally of the acquaintance type, it is the first mental state occurring in a causal chain originating in the object to which the thought refers, i.e., it is a sensation. The things of which we have knowledge of acquaintance include ordinary objects in the external world, such as the Sun.

Grote contrasted the imagistic thoughts involved in knowledge of acquaintance with things with the judgements involved in knowledge about things, suggesting that the latter but not the former are contentful mental states. Elsewhere, however, he suggested that every thought capable of constituting knowledge of or about a thing involves a form, idea, or what we might call conceptual propositional content, referring the thought to its object. Whether contentful or not, thoughts constituting knowledge of acquaintance with a thing are relatively indistinct, although this indistinctness does not imply incommunicability. Thoughts constituting knowledge about a thing, by contrast, are relatively distinct, as a result of 'the application of notice or attention' to the 'confusion or chaos' of sensation. Grote did not have an explicit theory of reference, the relation by which a thought is of or about a specific thing. Nor did he explain how thoughts can be more or less indistinct.

Helmholtz (1821-94) held unequivocally that all thoughts capable of constituting knowledge, whether 'knowledge which has to do with notions' or 'mere familiarity with phenomena', are judgements or, we may say, have conceptual propositional contents. Where Grote saw a difference between distinct and indistinct thoughts, Helmholtz found a difference between precise judgements which are expressible in words and equally precise judgements which, in principle, are not expressible in words, and so are not communicable.

James (1842-1910), however, made a genuine advance over Grote and Helmholtz by analysing the reference relation holding between a thought and the specific thing of or about which it is knowledge. In fact, he gave two different analyses. On both analyses, a thought constituting knowledge about a thing refers to and is knowledge about 'a reality, whenever it actually or potentially terminates in' a thought constituting knowledge of acquaintance with that thing. The two analyses differ in their treatments of knowledge of acquaintance. On James's first analysis, reference in both sorts of knowledge is mediated by causal chains. A thought constituting pure knowledge of acquaintance with a thing refers to and is knowledge of 'whatever reality it directly or indirectly operates on and resembles'. The concepts of a thought 'operating on' a thing or 'terminating in' another thought are causal, but where Grote found chains of efficient causation connecting thought and referent, James found teleology and final causes. On James's later analysis, the reference involved in knowledge of acquaintance with a thing is direct. A thought constituting knowledge of acquaintance with a thing has that thing as a constituent, and the thing and the experience of it are identical.

James further agreed with Grote that pure knowledge of acquaintance with things, e.g., sensory experience, is epistemically prior to knowledge about things. While all thoughts about things are fallible and their justification is augmented by their mutual coherence, James was unclear about the precise epistemic status of knowledge of acquaintance. At times, thoughts constituting pure knowledge of acquaintance are said to possess 'absolute veritableness' and 'the maximal conceivable truth', suggesting that such thoughts are genuinely cognitive and that they provide an infallible epistemic foundation. At other times, such thoughts are said not to bear truth-values, suggesting that 'knowledge' of acquaintance is not genuine knowledge at all, but only a non-cognitive necessary condition of genuine knowledge, that is to say, of knowledge about things.





What is more, Russell (1872-1970) agreed with James that knowledge of things by acquaintance 'is essentially simpler than any knowledge of truths, and logically independent of knowledge of truths', and that the mental states involved when one is acquainted with things do not have propositional contents. Russell's reasons seem to have been similar to James's: conceptually unmediated reference to particulars is necessary for understanding any proposition mentioning a particular and, if scepticism about the external world is to be avoided, some particulars must be directly perceived. Russell vacillated about whether or not the absence of propositional content renders knowledge by acquaintance incommunicable.

Russell agreed with James that different accounts should be given of reference as it occurs in knowledge by acquaintance and in knowledge about things, and that in the former case reference is direct. But Russell objected on a number of grounds to James's causal account of the indirect reference involved in knowledge about things. Russell gave a descriptional rather than a causal analysis of that sort of reference: a thought is about a thing when the content of the thought involves a definite description uniquely satisfied by the thing referred to. Indeed, he preferred to speak of knowledge of things by description rather than of knowledge about things.

Russell advanced beyond Grote and James by explaining how thoughts can be more or less articulate and explicit. If one is acquainted with a complex thing without being aware of or acquainted with its complexity, the knowledge one has by acquaintance with that thing is vague and inexplicit. Reflection and analysis can lead one to distinguish constituent parts of the object of acquaintance and to obtain progressively more distinct, explicit, and complete knowledge about it.

Because one can interpret the relation of acquaintance or awareness as one that is not epistemic, i.e., not a kind of propositional knowledge, it is important to distinguish views read as ontological theses from a view one might call epistemological direct realism: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. Read ontologically, direct realism is a view about what the objects of perception are. It is a type of realism, since it is assumed that these objects exist independently of any mind that might perceive them; and so it thereby rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being a 'direct' realism rules out those views defended under the rubric of 'critical realism' or 'representative realism', in which there is some non-physical intermediary - usually called a 'sense-datum' or a 'sense impression' - that must first be perceived or experienced in order to perceive the object that exists independently of this perception. According to critical realists, such an intermediary need not be perceived 'first' in a temporal sense, but it is a necessary ingredient which suggests to the perceiver an external reality, or which offers the occasion on which to infer the existence of such a reality. Direct realism, however, denies the need for any recourse to a mental go-between in order to explain our perception of the physical world.

This reply on the part of the direct realist does not, of course, serve to refute the global sceptic, who claims that, since our perceptual experience could be just as it is without there being any real properties at all, we have no knowledge of any such properties. But no view of perception alone is sufficient to refute such global scepticism. For such a refutation we must go beyond a theory of how best to explain our perception of physical objects, and defend a theory that best explains how we obtain knowledge of the world.

The external world, as philosophers have used the term, is not some distant planet external to Earth. Nor is the external world, strictly speaking, a world. Rather, the external world consists of all those objects and events which exist external to perceivers. So the table across the room is part of the external world, and so are its brown colour and roughly rectangular shape. Similarly, if the table falls apart when a heavy object is placed on it, the event of its disintegration is a part of the external world.

One object external to and distinct from any given perceiver is any other perceiver. So, relative to one perceiver, every other perceiver is a part of the external world. However, another way of understanding the external world results if we think of the objects and events external to and distinct from every perceiver. So conceived, the set of all perceivers makes up a vast community, with all of the objects and events external to that community making up the external world. In what follows we will suppose that perceivers are entities which occupy physical space, if only because they are partly composed of items which take up physical space.

What, then, is the problem of the external world? Certainly it is not whether there is an external world; this much is taken for granted. Instead, the problem is an epistemological one which, in rough approximation, can be formulated by asking whether, and if so how, a person gains knowledge of the external world. So understood, the problem seems to admit of an easy solution: there is knowledge of the external world, which persons acquire primarily by perceiving objects and events which make up the external world.

However, many philosophers have found this easy solution problematic. Moreover, the very statement of the problem of the external world will be altered once we consider the main theses against the easy solution.

One way in which the easy solution has been further articulated is in terms of epistemological direct realism. This theory is realist in so far as it claims that objects and events in the external world, along with many of their various features, exist independently of, and are generally unaffected by, perceivers and the acts of perception in which they engage. And the theory is epistemologically direct since it also claims that in perception people often, and typically, acquire immediate non-inferential knowledge of objects and events in the external world. It is on this latter point that it is thought to face serious problems.

The main reason for this is that knowledge of objects in the external world seems to be dependent on some other knowledge, and so would not qualify as immediate and non-inferential. Thus it is claimed that I do not gain immediate non-inferential perceptual knowledge that there is a brown and rectangular table before me, because I would not know such a proposition unless I knew that something then appeared brown and rectangular. Hence, knowledge of the table is dependent upon knowledge of how it appears. Alternatively expressed, if there is knowledge of the table at all, it is indirect knowledge, secured only if the proposition about the table may be inferred from propositions about appearances. If so, epistemological direct realism is false.

This argument suggests a new way of formulating the problem of the external world:

Problem of the external world (first formulation): can we have knowledge of propositions about objects and events in the external world based upon propositions which describe how the external world appears, i.e., upon appearances?

Unlike our original formulation of the problem of the external world, this formulation does not admit of an easy solution. Instead, it has seemed to many philosophers that it admits of no solution at all, so that scepticism regarding the external world is the only remaining alternative.

A second theory, perceptual direct realism, is realist in just the way described earlier, but it adds that objects and events in the external world are typically directly perceived, as are many of their features, such as their colours, shapes, and textures.

Often perceptual direct realism is developed further by simply adding epistemological direct realism to it. Such an addition is supported by claiming that direct perception of objects in the external world provides us with immediate non-inferential knowledge of such objects. Seen in this way, perceptual direct realism is supposed to support epistemological direct realism, but strictly speaking they are independent doctrines: one might consistently, perhaps even plausibly, hold one without also accepting the other.

Direct perception is perception which is not dependent on some other perception. The main opposition to the claim that we directly perceive external objects comes from indirect or representative realism. That theory holds that whenever an object in the external world is perceived, some other object is also perceived, namely a sensum - a phenomenal entity of some sort. Further, one would not perceive the external object if one were to fail to perceive the sensum. In this sense the sensum is a perceived intermediary, and the perception of the external object is dependent on the perception of the sensum. For such a theory, perception of the sensum is direct, since it is not dependent on some other perception, while perception of the external object is indirect. More generally, for the indirect realist, all directly perceived entities are sensa. On the other hand, those who accept perceptual direct realism claim that perception of objects in the external world is typically direct, since that perception is not dependent on any perceived intermediaries such as sensa.

It has often been supposed, however, that the argument from illusion suffices to refute all forms of perceptual direct realism. The argument from illusion is actually a family of different arguments rather than one argument. Perhaps the most familiar argument in this family begins by noting that objects appear differently to different observers, and even to the same observer on different occasions or in different circumstances. For example, a round dish may appear round to a person viewing it from directly above and elliptical to another viewing it from one side. As one changes position, the dish will appear to have still different shapes, more and more elliptical in some cases, closer and closer to round in others. In each such case, it is argued, the observer directly sees an entity with that apparent shape. Thus, when the dish appears elliptical, the observer is said to see directly something which is elliptical. Certainly this elliptical entity is not the top surface of the dish, since that is round. This elliptical entity, a sensum, is thought to be wholly distinct from the dish.

In seeing the dish from straight above, it appears round, and it might be thought that one then directly sees the dish rather than a sensum. But here, too, relativity sets in: the dish will appear different in size as one is placed at different distances from it. So even if in all of these cases the dish appears round, it will also appear to have many different diameters. Hence, in these cases as well, the observer is said to directly see some sensum, and not the dish.

This argument concerning the dish can be generalized in two ways. First, more or less the same argument can be mounted for all other cases of seeing and across the full range of sensible qualities - textures and colours in addition to shapes and sizes. Second, one can utilize related relativity arguments for the other sense modalities. With the argument thus completed, one will have reached the conclusion that in all cases of non-hallucinatory perception, the observer directly perceives a sensum, and not an external physical object. Presumably in cases of hallucination a related result holds, so that one reaches the fully general result that in all cases of perceptual experience, what is directly perceived is a sensum or group of sensa, and not an external physical object. Perceptual direct realism, therefore, is deemed false.

Yet, even if perceptual direct realism is refuted, this by itself does not generate a problem of the external world. We need to add that if no person ever directly perceives an external physical object, then no person ever gains immediate non-inferential knowledge of such objects. Armed with this additional premise, we can conclude that if there is knowledge of external objects, it is indirect and based upon immediate knowledge of sensa. We can then formulate the problem of the external world in another way:

Problem of the external world (second formulation): can we have knowledge of propositions about objects and events in the external world based upon propositions about directly perceived sensa?

It is worth noting the difference between the problem of the external world in its first formulation and in this second one. In both cases, the thought is that we have knowledge of the external world only if propositions about objects and events in the external world are inferable from propositions about appearances.

Some philosophers have thought that analytic phenomenalism would change the situation. Analytic phenomenalism is the doctrine that every proposition about objects and events in the external world is fully analysable into, and thus is equivalent in meaning to, a group of propositions about appearances. The number of propositions about appearances making up the analysis of any single proposition about objects and events in the external world would likely be enormous, perhaps indefinitely large. Nevertheless, analytic phenomenalism might seem to be of help, since it would allow the required deduction of propositions about objects and events in the external world from propositions about appearances. Yet because there are indefinitely many propositions about appearances in the analysis of each proposition about objects and events in the external world, the inference is apt to be inductive rather than deductive, even granting the truth of analytic phenomenalism. Moreover, most of the propositions about appearances into which we might hope to analyse propositions about the external world would be complex subjunctive conditionals, such as that expressed by ‘If I were to seem to see something red, round and spherical, and if I were to seem to try to taste what I seem to see, then most likely I would seem to taste something sweet and slightly tart’. But propositions about appearances of this complex sort will not typically be immediately known, and thus knowledge of propositions about objects and events in the external world will not generally be based upon immediate knowledge of such propositions about appearances.

Consider the propositions about appearances expressed by ‘I seem to see something red, round, and spherical’ and ‘I seem to taste something sweet and slightly tart’. To infer cogently from these propositions to that expressed by ‘There is an apple before me’ we need additional information, such as that expressed by ‘Apples generally cause visual appearances of redness, roundness, and spherical shape, and gustatory appearances of sweetness and tartness’. With this additional information the inference is a good one, and relative to those premisses it is likely to be true that there is an apple there. The cogency of the inference, however, depends squarely on the additional premiss; relative only to the stated propositions about appearances, it is not highly probable that there is an apple there.

Moreover, there is good reason to think that analytic phenomenalism is false, for every proposed translation of propositions about objects and events in the external world into propositions about appearances has failed. Nor is enumerative induction of any help in this regard, for that is an inference from premisses about observed objects in a certain class having properties ‘F’ and ‘G’ to the conclusion that unobserved objects in the same class have properties ‘F’ and ‘G’; here, however, the premisses concern appearances while the conclusion concerns external objects and events, which belong to a different category altogether. So the most likely inductive inference to consider is a causal one: we infer from certain effects, described by propositions about appearances, to their likely causes, described by propositions about objects and events in the external world. But here, too, the inference is apt to prove problematic. In evaluating the claim that such an inference constitutes a legitimate and independent form of argument, one must ask whether it is merely a contingent fact that most phenomena have explanations, and that the simplest candidate is usually the correct explanation; it is difficult to avoid the conclusion that, if this is true, it is an empirical fact about ourselves that could be discovered only by inference to the best explanation.

Defenders of realism have sometimes appealed to an inference to the best explanation to justify propositions about objects and events in the external world: the best explanation of the appearances, we might say, is that they are caused by external objects. However, even if this is true, as no doubt it is, it is unclear how establishing this general hypothesis helps justify specific propositions about objects and events in the external world, such as the proposition that these particular appearances are caused by a red apple.

The point here is a general one: cogent inductive inferences from propositions about appearances to propositions about objects and events in the external world are available only with some added premiss expressing the requisite causal relation, or perhaps some other premiss describing some other sort of correlation between appearances and external objects. So there is no reason to think that indirect knowledge of the external world can be secured on the basis of propositions about appearances, and epistemological direct realism has already been denied. Since deductive and inductive inferences from propositions about appearances to propositions about objects and events in the external world seem to exhaust the options, no solution that sustains our having knowledge of propositions about objects and events in the external world based upon propositions which describe the world as it appears is at hand. So unless there is some solution to this, it would appear that scepticism concerning knowledge of the external world would be the most reasonable position to take.

If the earlier argument is accepted, then if there is knowledge of external objects it is indirect and based upon immediate knowledge of sensa, and the question becomes whether we can have knowledge of propositions about objects and events in the external world based upon propositions about directly perceived sensa. Broadly speaking, there are two alternatives to perceptual direct realism: perceptual indirect realism and perceptual phenomenalism. The contrast between them is that perceptual phenomenalism rejects realism outright and holds instead that (1) physical objects are collections of sensa, (2) in all cases of perception at least one sensum is directly perceived, and (3) to perceive a physical object one directly perceives some of the sensa which are constituents of the collection making up that object.

Proponents of each of these positions try to solve the problem in different ways, and the question is whether either is any better able to solve it than the doctrines already considered. The answer has seemed to most philosophers to be ‘no’, for in general indirect realists and phenomenalists rely on strategies we have already considered and rejected.

In thinking about these possibilities we need to bear in mind the role of propositions which describe presently directly perceived sensa. Indirect realists typically claim that the inference from propositions about presently directly perceived sensa to propositions about external objects is an inductive one, specifically a causal inference from effects to causes. An inference of this sort will be perfectly cogent provided we can use a premiss which specifies that physical objects of a certain type are causally correlated with sensa of the sort currently directly perceived. Such a premiss will itself be justified, if at all, solely on the basis of propositions describing presently directly perceived sensa, since for the indirect realist one never directly perceives the causes of sensa. So if one knows that, say, apples typically cause such-and-such visual sensa, one knows this only indirectly, on the basis of knowledge of sensa. But no group of propositions about directly perceived sensa by itself supports any inference to causal correlations of this sort. Consequently, indirect realists are in no position to justify the added causal premiss, and so in no position to show how indirect knowledge of external objects, based upon immediate knowledge of sensa, is to be had.

Phenomenalists have often supported their position, in part, by noting the difficulties facing indirect realism, but phenomenalism is no better off with respect to inferring propositions about objects and events from propositions about appearances. Phenomenalism construes physical objects as collections of sensa. So to infer a proposition about a physical object from propositions about directly perceived sensa is to infer a proposition about a collection from propositions about constituent members of the collection, an inductive inference, although not a causal one. Nonetheless, the inference in question will require a premiss that such-and-such directly perceived sensa are constituents of some collection ‘C’, where ‘C’ is some physical object such as an apple. The problem comes with trying to justify such a premiss. To do this, one will need some plausible account of what is meant by claiming that physical objects are collections of sensa. To explicate this idea, however, phenomenalists have typically turned to analytic phenomenalism: physical objects are collections of sensa in the sense that propositions about physical objects are analysable into propositions about sensa. And analytic phenomenalism, as we have seen, has been discredited.

If neither the inference from propositions about appearances nor any other route to propositions about the external world can be vindicated, then scepticism about the external world is a doctrine we would be forced to adopt. One might even say that it is here that we locate the real problem of the external world: how can we avoid being forced into accepting scepticism?

One way of avoiding scepticism is to question the arguments which lead to both formulations of the problem of the external world. The crucial question is whether any part of the argument from illusion really forces us to abandon perceptual direct realism. To help see that the answer is ‘no’, we may note that a key premiss in the relativity argument links how something appears with direct perception: the fact that the dish appears elliptical is supposed to entail that one directly perceives something which is elliptical. But is there really such an entailment? Certainly we do not think that the proposition expressed by ‘The book appears worn and dusty and more than two hundred years old’ entails that the observer directly perceives something which is worn and dusty and more than two hundred years old. And there are countless other examples like this one, where we will resist the inference from a property ‘F’ appearing to someone to the claim that ‘F’ is instantiated in some entity.

Proponents of the argument from illusion might reply that the inference they favour works only for certain adjectives, specifically for adjectives referring to non-relational sensible qualities such as colour, taste, shape, and the like. Such a move, however, requires an argument which shows why the inference works in these restricted cases and fails in all others. No such argument has ever been provided, and it is difficult to see what it might be.

If the argument from illusion is defused, the major threat to perceptual direct realism is removed: we gain knowledge of objects and events in the external world primarily by perceiving them, and the theory is realist in holding that objects and events in the external world, along with many of their features, exist independently of and are generally unaffected by perceivers and the acts of perception in which they engage, and that such objects and features are typically directly perceived. Hence, there will no longer be any real motivation for the claim that scepticism concerning knowledge of the external world is the most reasonable position to take. Of course, even if perceptual direct realism is reinstated, this does not by itself settle whether knowledge of objects in the external world is dependent on some other knowledge, and so would fail to qualify as immediate and non-inferential. That problem might arise even for one who accepts perceptual direct realism. But there is reason to be suspicious of the supporting argument that one would not know that one is seeing something blue if one failed to know that something looked blue. In this sense there is a dependence of the former on the latter; what is not clear is whether the dependence is epistemic or semantic. It is the latter if, in order to understand what it is to see something blue, one must also understand what it is for something to look blue. This may be true even when the belief that one is seeing something blue is not epistemically dependent on or based upon the belief that something looks blue. Merely claiming that there is a dependence relation does not discriminate between epistemic and semantic dependence. Moreover, there is reason to think it is not an epistemic dependence, for in general observers rarely have beliefs about how objects appear, yet this fact does not impugn their knowledge that they are seeing, e.g., blue objects.

Human history is in essence a history of ideas, for thought is distinctly intellectual and stresses contemplation and reasoning, just as language is the dress of thought. Ideas began with Plato as eternal, mind-independent forms or archetypes of the things in the material world. Neoplatonism made them thoughts in the mind of God who created the world. The much criticized ‘new way of ideas’, so much a part of seventeenth- and eighteenth-century philosophy, began with Descartes’ (1596-1650) conscious extension of ‘idea’ to cover whatever is in human minds too, an extension of which Locke (1632-1704) made much use. But are ideas representational, like mental images of things outside the mind, or non-representational, like sensations? If representational, are they mental objects, standing between the mind and what they represent, or are they mental acts and modifications of a mind perceiving the world directly? Finally, are they neither objects nor mental acts, but dispositions? Malebranche (1638-1715) and Arnauld (1612-94), and then Leibniz, famously disagreed about how ‘ideas’ should be understood, and recent scholars disagree about how Arnauld, Descartes, Locke and Malebranche in fact understood them.





We could derive a scientific understanding of these ideas with the aid of precise deduction, as Descartes had claimed that we could lay the contours of physical reality out in three-dimensional co-ordinates. Following the publication of Isaac Newton’s ‘Principia Mathematica’ in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that we could know and master the entire physical world through the extension and refinement of mathematical theory became the central feature and principle of scientific knowledge.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality, and seemed to know virtually nothing about the physical substrates of human consciousness, the business of examining the dynamics and structure of mind became the province of social scientists and humanists. Adolphe Quételet proposed a ‘social physics’ that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

What exists in the mind as a representation (as of something comprehended) or as a formulation (as of a plan) feeds into belief, and the question of how ‘ideas’ should be understood returns. Leibniz added the doctrine that each individual substance has a complete concept, and that there is an ontological correlate of this completeness: a modification of the individual substance corresponding to each truth about it. Malebranche, Arnauld and Leibniz disagreed about how ‘ideas’ should be understood, and recent scholars disagree about how Arnauld, Descartes, Locke and Malebranche in fact understood them.

Contemporary philosophy of mind, following cognitive science, uses the term ‘representation’ to mean just about anything that can be semantically evaluated. Thus, representations may be said to be true, to refer, to be accurate, and so forth. Representation thus conceived comes in many varieties. The most familiar are pictures, three-dimensional models (e.g., statues, scale models), linguistic text (including mathematical formulas) and various hybrids of these such as diagrams, maps, graphs and tables. It is an open question in cognitive science whether mental representation, which is our real topic, falls within any of these or any other familiar kinds.

It is uncontroversial in contemporary cognitive science that cognitive processes are processes that manipulate representations; this is the representational theory of cognition and thought. The idea seems nearly inevitable. What makes the difference between processes that are cognitive (solving a problem, say) and those that are not (a patellar reflex, for example) is just that cognitive processes are epistemically assessable: a solution procedure can be justified or correct, as a reflex cannot. Since only things with content can be epistemically assessed, processes appear to count as cognitive only in so far as they implicate representations.

It is tempting to think that thoughts are the mind’s representations: are not thoughts just those mental states that have semantic content? This is, no doubt, harmless enough provided we keep in mind that cognitive science may attribute to thoughts properties and contents that are foreign to common sense. First, most of the representations hypothesized by cognitive science do not correspond to anything commonsensical. Standard psycholinguistic theory, for instance, hypothesizes the construction of representations of the syntactic structures of the utterances one hears and understands; yet we are not aware of, and nonspecialists do not even understand, the structures represented. Thus, cognitive science may attribute thoughts where common sense would not. Second, cognitive science may find it useful to individuate thoughts in ways foreign to common sense.

However, concepts of action presuppose the propositional attitudes: the patterns of behaviour from which such concepts are supposedly derived can be discerned only by someone who already possesses propositional-attitude concepts. If so, the existence of the patterns can hardly be the source of our propositional-attitude concepts, and the behavioural account of the attitudes would be no more successful than the appeal to such patterns, for it is doubtful that the patterns are revealed to us at all without those concepts. Propositional attitudes are, nonetheless, mental states having content: a belief may have the content that I will catch the train, or a hope may have the content that the prime minister will resign. A concept is something that can be a constituent of such contents. More specifically, a concept is a way of thinking of something - a particular object, or property, or relation, or another entity.

Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Mary Smith, or as the person in a certain room now. More generally, concepts ‘c’ and ‘d’ are distinct if a thinker can believe that ‘c’ is such-and-such without believing that ‘d’ is such-and-such. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by ‘that . . .’ clauses, as in our opening examples, they may be true or false, depending on the way the world is.

Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; nonetheless, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen’s Pictures, was a secret agent. That is, we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person’s conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. Yet, whether or not it would be correct to do so, it is quite intelligible for someone to reject this conception by arguing that it does not adequately provide for the elements of fairness required by the concept of justice.

A fundamental question for philosophy is: what individuates a given concept - that is, what makes it the concept it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question (Schiffer, 1987). An alternative approach, favoured by most, starts from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ is individuated by this condition: it is the unique concept ‘C’ to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses ‘A’ and ‘B’, ‘A C B’ can be inferred; and from any premiss ‘A C B’, each of ‘A’ and ‘B’ can be inferred. Again, an observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those which are. A statement which individuates a concept by saying what is required for a thinker to possess it can be described as giving the ‘possession condition’ for the concept.
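The two inferential forms mentioned in this possession condition are simply the introduction and elimination rules for conjunction. Written in a standard natural-deduction notation (a conventional rendering added here for illustration, with ‘C’ standing for the concept in question), they are:

\[
\frac{A \qquad B}{A\,C\,B}\ (\text{introduction})
\qquad\qquad
\frac{A\,C\,B}{A} \qquad \frac{A\,C\,B}{B}\ (\text{elimination})
\]

On this proposal, a thinker possesses ‘and’ just in case he finds instances of these forms primitively compelling, that is, compelling without basing them on any further inference or information.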

A possession condition for a particular concept may actually use that concept; the possession condition for ‘and’ does not. We can also expect to use observational concepts in specifying the kinds of experience which have to be mentioned in the possession conditions for observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: it is not possible to master any one member of the family without mastering the others. Two families that plausibly have this status are these: the family consisting of the concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of numerical quantifiers (there are 0 so-and-sos, there is 1 so-and-so, . . .); and the family consisting of the concepts ‘belief’ and ‘desire’. Such families exhibit what has become known as ‘local holism’. A local holism does not prevent the individuation of a concept by its possession condition; rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such conditions involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated. The possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.

A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though the thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental states can vary if the thinker’s social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.

Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a ‘correctness condition’ for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker’s reasons for making judgements. A thinker’s visual perception can give him good reason for judging ‘That man is bald’; it does not by itself give him good reason for judging ‘Rostropovich is bald’, even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object, or property, or function . . . which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker’s previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue: it would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to newly encountered objects. The judgement is correct if the new object has the property that in fact makes the judgemental practices mentioned in the possession condition yield true judgements, or truth-preserving inferences.

What is more, innate ideas have been variously defined by philosophers: either as ideas consciously present to the mind - that element or complex of elements in an individual which feels, perceives, thinks, wills, and especially reasons - prior to sense experience, or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies: the dispositional sense.

Understood in either way, they were invoked to account for our recognition of certain truths without recourse to experiential verification, such as those of mathematics, or to justify certain moral and religious claims held to be knowable by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.

One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas held to be innate and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our idea of God, for example, is taken as a source for the meaning of the word God. When innate ideas are understood propositionally, their innateness is taken as evidence of their truth. However, this clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, and then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky’s influential account of the mind’s linguistic capabilities.

The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize the truth of propositions that cannot be justified solely by an appeal to sense experience. Thus Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection. Since there was no plausible post-natal source, the recollection must refer to the prenatal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there were important truths innate in human beings and that the senses hindered their proper apprehension.

The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages, and the doctrine featured powerfully in scholastic teaching until its displacement by Locke’s philosophy in the eighteenth century. It had meanwhile acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our ideas of God, for example, and our coming to recognize that God must necessarily exist, are, Descartes held, logically independent of sense experience. In England the Cambridge Platonists such as Henry More and Ralph Cudworth added considerable support.

Locke’s rejection of innate ideas and his alternative empiricist account were powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant’s refinement of the classification of propositions with the fourfold distinction analytic/synthetic and a priori/a posteriori did nothing to encourage a return to the doctrine of innate ideas, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and justifying our regarding some propositions as necessarily true.

Nevertheless, according to Kant, our knowledge arises from two fundamentally different faculties of the mind, sensibility and understanding. Kant criticized his predecessors for running these faculties together: Leibniz for treating sensibility as a confused mode of understanding, and Locke for treating understanding as an abstracted mode of sense perception. Kant held that each faculty operates with its own distinctive type of mental representation. Concepts, the instruments of the understanding, are mental representations that apply potentially to many things in virtue of their possession of a common feature. Intuitions, the instruments of sensibility, are representations that refer to just one thing and to that thing directly; the role they play is akin to that played in Russell’s philosophy by ‘acquaintance’. Through intuitions objects are given to us, Kant said; through concepts they are thought.

Nonetheless, it is a celebrated Kantian thesis that knowledge is yielded neither by intuitions nor by concepts alone, but only by the two in conjunction. ‘Thoughts without content are empty’, he says in an often quoted remark, and ‘intuitions without concepts are blind’. Exactly what Kant means by the remark is a debated question, answered in different ways by scholars who bring different elements of Kant’s text to bear on it. A minimal reading is that it is only propositionally structured knowledge that requires the collaboration of intuition and concept: this view allows that intuitions without concepts constitute some kind of nonjudgmental awareness. A stronger reading is that it is reference or intentionality that depends on intuition and concept together, so that the blindness of intuition without concept is its failure to refer to an object. A more radical reading still is that intuition without concepts is indeterminate, a mere blur, perhaps nothing at all. This last interpretation, though admittedly suggested by some things Kant says, is at odds with his official view about the separation of the faculties.

‘Content’ has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content, and a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of nonlinguistic representations than is known about characterizing linguistic representations. ‘Content’ is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation’s content is just whatever it is that underwrites its semantic evaluation.

According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950; Ayer, 1956), or conviction (Lehrer, 1974), or acceptance (Lehrer, 1989). Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or a facsimile, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato in view of his claim that knowledge is infallible while belief or opinion is fallible (Republic). Nonetheless this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones (1938; see also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say ‘I do not believe she is guilty. I know she is’. However, this just makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: ‘You did not hurt him, you killed him’.

H. A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and on the assumption that when we believe in the truth of a claim we are not certain about its truth. Since belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, Prichard gives us no good reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.

A. D. Woozley (1953) defends a version of the separability thesis. Woozley’s version deals with psychological certainty rather than belief: knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions’. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct’. Woozley explains this tension by appealing to a distinction between conditions under which we are justified in making a claim, such as a claim to know something, and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, it would nevertheless be inappropriate for me to claim that I know such and such unless I were sure of the truth of my claim.

European philosophers such as Immanuel Kant sought more formally to reconcile representations of external reality in mind with the motions of matter, based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each remained obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.

What follows is a proposal for a new understanding of the relationship between mind and world, set within the larger context of the history of mathematical physics, the origin and extensions of the classical view of the foundations of scientific knowledge, and the various ways that physicists have attempted to meet previous challenges to the efficacy of classical epistemology.

There is no solid or functional basis in contemporary physics or biology for believing in the stark Cartesian division between mind and world that some have described as ‘the disease of the Western mind’. This discussion will serve as background for understanding a new relationship between parts and wholes in physics, together with a similar view of that relationship that has emerged in the so-called ‘new biology’ and in recent studies of the evolution of scientific understanding toward a more conceptualized representation of ideas and their allied ‘content’.

Nonetheless, it seems a strong possibility that Plato and Whitehead connect on the issue of the creation of the sensible world, if we look at actual entities as aspects of nature’s contemplation. The contemplation of nature is obviously an immensely intricate affair, involving a myriad of possibilities; one can therefore look at actual entities as, in some sense, the basic elements of a vast and expansive process.


The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the ‘death of God’ theologian Friedrich Nietzsche (1844-1900). After declaring that God and ‘divine will’ did not exist, Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’, and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The claim was that earlier versions of the ‘will to truth’, including that accredited to the doing of science, disguise the fact that all alleged truths are arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will’.

In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no really necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he concluded that we are all locked in ‘a prison house of language’. The prison, as he conceived it, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on ‘will’.

Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals, and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, is not exclusive to natural phenomena and favours reductionistic examination of phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche’s emotionally charged defence of intellectual freedom, and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe, proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism and deconstruction: Jacques Lacan, Roland Barthes, Michel Foucault and Jacques Derrida. The direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of this cultural ambience and the ways in which the underlying conflict might be resolved.

The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, ‘relativistic’ notions.

Albert Einstein put forward two theories: the special theory of relativity (1905) and the general theory of relativity (1915). The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and postulated absolute space.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces a progressive order in the complementary relations of its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.

But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined or invalidated with appeals to scientific knowledge.

Issues surrounding certainty are especially connected with those concerning ‘scepticism’. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., there is a gulf between appearances and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus. The scepticism of Pyrrho and the new Academy was a system of argument opposing dogmatism, and particularly the philosophical system-building of the Stoics.

Many sceptics have traditionally held that knowledge requires certainty, and, holding that such certainty is beyond us, they have affirmed that knowledge is not possible. This is connected in part with the principle that every effect is a consequence of an antecedent cause or causes; for causality to hold it is not necessary for an effect to be predictable, as the antecedent causes may be numerous, too complicated, or too interrelated for analysis. Nevertheless, in order to avoid scepticism, the anti-sceptic has generally held that knowledge does not require certainty, except for alleged cases of things that are evident for one just by being true. It has often been thought that anything known must satisfy certain criteria as well as being true; whether by ‘deduction’ or ‘induction’, there will be criteria specifying when a belief is warranted. Apart from the alleged cases of self-evident truths, there must be some general principle specifying the sort of consideration that will make a belief evident, or warranted to some degree.

Besides this, there is another view, the absolute, global view that we do not have any knowledge whatsoever. It is doubtful, however, that any philosopher seriously entertains such absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to 'the evident', the appearances, since a belief about how things appear requires no further evidence to be warranted.

René Descartes (1596-1650), in his sceptical guise, never doubted the contents of his own ideas; what he doubted was whether they 'corresponded' to anything beyond ideas.

All in all, Pyrrhonism and Cartesian scepticism are forms of nearly global scepticism. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic's mill. The Pyrrhonist holds that no non-evident, empirical belief is sufficiently warranted, whereas a Cartesian sceptic holds that no empirical belief about anything other than one's own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. Accordingly, the essential difference between the two views concerns the stringency of the requirements for a belief's being sufficiently warranted to count as knowledge.

A Cartesian requires certainty; a Pyrrhonist merely requires that a belief be more warranted than its negation.

Cartesian scepticism is so called because of the way in which Descartes argues for scepticism: the claim is that we do not have knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions, because there is no way to justifiably deny that our senses are being stimulated by some cause which is radically different from the objects we normally take to affect our senses. If the Pyrrhonist is the agnostic, the Cartesian sceptic is the atheist.

Emerging sceptical tendencies came forward in the fourteenth-century writings of Nicholas of Autrecourt, whose criticisms of any certainty beyond the immediate deliverances of the senses and basic logic, and in particular of any knowledge of either intellectual or material substances, anticipate the later scepticism of Bayle and Hume. The latter distinguished between Pyrrhonian or excessive scepticism, which he regarded as unlivable, and the more mitigated scepticism that accepts everyday or commonsense beliefs (not as the deliverances of reason, but as due more to custom and habit), while remaining duly wary of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by ancient scepticism from Pyrrho through to Sextus Empiricus. Although the phrase 'Cartesian scepticism' is sometimes used, Descartes himself was not a sceptic; rather, in the method of doubt, he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes himself trusts a category of clear and distinct ideas, not far removed from the phantasia kataleptiké of the Stoics.

Scepticism should not be confused with relativism, which is a doctrine about the nature of truth, and may be motivated by trying to avoid scepticism. Nor is it identical with eliminativism, which counsels abandoning an area of thought altogether, not because we cannot know the truth, but because there are no truths capable of being framed in the terms we use.

Descartes's theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This is eventually found in the celebrated 'cogito ergo sum': I think, therefore I am. By locating the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysical associate of this epistemological priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and in order to prove the reliability of the senses he invokes a 'clear and distinct perception' of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit.

In his own time Descartes's conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems about the nature of the causal connexion between mind and body. It also gives rise to the problem, insoluble in its own terms, of other minds. Descartes's notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax as surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes held, as Leibniz would later stress, that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension, there can be no empty space or void; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).

Although the structure of Descartes's epistemology, theory of mind, and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

The self, as Descartes presents it in the first two Meditations, is aware only of its own thoughts, and capable of disembodied existence, neither situated in a space nor surrounded by others. This is the pure self or 'I' that we are tempted to imagine as a simple unique thing that makes up our essential identity. Descartes's view that he could keep hold of this nugget while doubting everything else is criticized by Lichtenberg and Kant, and by most subsequent philosophers of mind.

Descartes holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions because there is no way to deny justifiably that our senses are being stimulated by some cause (an evil spirit, for example) which is radically different from the objects that we normally think affect our senses.

He also points out that the senses (sight, hearing, touch, etc.) are often unreliable, and that it is prudent never to trust entirely those who have deceived us even once; he cites such instances as the straight stick that looks bent in water, and the square tower that looks round from a distance. This argument from illusion has not, on the whole, impressed commentators, and some of Descartes's contemporaries pointed out that, since such errors become known as a result of further sensory information, it cannot be right to cast wholesale doubt on the evidence of the senses. But Descartes regarded the argument from illusion as only the first stage in a softening-up process which would lead the mind away from the senses. He admits that there are some cases of sense-based belief about which doubt would be insane, e.g., the belief that I am sitting here by the fire, wearing a winter dressing gown.

Descartes was to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for what we know from direct experience to be distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.

A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Newton's Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

Returning to the theory of knowledge: its central questions include the origin of knowledge, the place of experience in generating knowledge and the place of reason in doing so, the relationship between knowledge and certainty and between knowledge and the impossibility of error, the possibility of universal scepticism, and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning.

Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who located his foundations in the clear and distinct ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, much as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth. It is widely accepted that trying to make the connexion between thought and experience through basic sentences depends on an untenable 'myth of the given'.

Still, in spite of these concerns, the tradition of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato's view in the Theaetetus that knowledge is true belief plus some logos. Naturalized epistemology, by contrast, is the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, or as proof against scepticism, or even as apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for external or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.

The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too great a confidence in the possibility of a purely a priori 'first philosophy', or a viewpoint beyond that of the working practitioners, from which their best efforts can be measured as good or bad. Such standpoints now seem to many philosophers to be too fanciful; the more modest task that remains is to describe the presuppositions actually adopted at various historical stages of investigation into different areas, with the aim not so much of criticism as of systematization. There is still a role for local methodological disputes within the community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific, but logic and philosophy will not, on the modern view, provide any independent arsenal of weapons for such battles, which often come to seem more like factional bids for ascendancy within a discipline.

This is an approach to the theory of knowledge that sees an important connexion between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At least once, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.

Chance can influence the outcome at each stage: first, in the creation of genetic mutations; second, in whether the bearer lives long enough to show their effects; third, in chance events that influence an individual's actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance lost in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, were the process run over again, the outcome would surely be different. Not only might there not be humans; there might not even be anything like mammals.

We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean 'Does natural selection always take the best path for the long-term welfare of a species?', the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean 'Does natural selection create every adaptation that would be valuable?', the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none has it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not guarantee that it will evolve.

This is an approach to the theory of knowledge that sees an important connexion between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. The three major components of the model of natural selection are variation, selection, and retention. According to Darwin's theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that perform useful functions are selected, while those that do not are not selected; such selection is responsible for the appearance that variations occur intentionally. In the modern theory of evolution, genetic mutations provide the blind variations: blind in the sense that variations are not influenced by the effects they would have (the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism); the environment provides the filter of selection; and reproduction provides the retention. Fitness is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features which are better adapted. Evolutionary epistemology applies this blind-variation-and-selective-retention model to the growth of scientific knowledge and to human thought processes overall.
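To make the variation-selection-retention structure concrete, here is a minimal illustrative sketch in Python (not from the original discussion; the target string, alphabet, and fitness measure are invented purely for the example) of a blind-variation-and-selective-retention loop:

import random

# Illustrative only: "variation" proposes a random change, "selection" filters
# by a fitness criterion standing in for the environment, and "retention"
# keeps the better candidate for the next round.

TARGET = "KNOWLEDGE"                      # hypothetical stand-in for the selective environment
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate: str) -> int:
    # Number of positions matching the target: the selective criterion.
    return sum(c == t for c, t in zip(candidate, TARGET))

def blind_variation(candidate: str) -> str:
    # Blind in the relevant sense: the change is not influenced by whether
    # it will help or hurt the candidate's fitness.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

retained = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(10000):
    variant = blind_variation(retained)
    if fitness(variant) >= fitness(retained):   # selection
        retained = variant                      # retention
    if retained == TARGET:
        print(f"matched after {generation} variations")
        break

The point of the sketch is only that an unguided source of variation, combined with selection and retention, can accumulate adaptation without any variation being produced because it is useful.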

The parallel between biological evolution and conceptual or epistemic evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology views biological evolution as the main cause of the growth of knowledge. On this view, called the 'evolution of cognitive mechanisms program' by Bradie (1986) and the 'Darwinian approach to epistemology' by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms which guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990).

On the analogical version of evolutionary epistemology, called the 'evolution of theories program' by Bradie (1986) and the 'Spencerian approach' (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the (partial) fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.

Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version of evolutionary epistemology begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the metaphorical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if Creationism is the correct theory of the origin of species.

Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions, albeit implicit, come from psychology and cognitive science, not evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom, i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one's knowledge beyond what one knows, one must proceed from something that is already known; but, more interestingly, it also makes the synthetic claim that when expanding one's knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is thus synthetic, not analytic. If the central claim were analytic, then all non-evolutionary epistemologies would be self-contradictory, which they are not. Campbell is right that evolutionary epistemology does have the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).

Two further issues loom large in the literature: questions about realism, i.e., what metaphysical commitment does an evolutionary epistemologist have to make? And questions about progress, i.e., according to evolutionary epistemology, does knowledge develop toward a goal? With respect to realism, many evolutionary epistemologists endorse what is called hypothetical realism, a view that combines a version of epistemological scepticism with a tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here, but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Others have argued that evolutionary epistemologists must give up the truth-directed sense of progress because a natural selection model is in essence non-teleological; as an alternative, following Kuhn (1970), a non-teleological notion of progress is embraced in conjunction with evolutionary epistemology.

Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978; Ruse, 1986). Stein and Lipton (1990) have argued, nonetheless, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves the products of blind variation and selective retention. Further, Stein and Lipton argue that heuristics are analogous to biological pre-adaptations, evolutionary precursors such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The guidance of epistemic variation by such heuristics is, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.

Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986; Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs are innate or if our non-innate beliefs are not the result of blind variation. An appeal to biological blindness is therefore not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).

Although it is a new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right causal connexion to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, Armstrong (1973) proposed that a belief of the form 'this perceived object is F' is non-inferential knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject 'χ' and perceived object 'y', if 'χ' has those properties and believes that 'y' is F, then 'y' is F. (Dretske (1981) offers a rather similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.)
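Put schematically (a reconstruction in standard notation, not Armstrong's own symbolism), the reliable-sign condition amounts to a law-like generalization:

\[ \forall x\,\forall y\,\big[(Hx \wedge B_x(Fy)) \rightarrow Fy\big], \]

where \(Hx\) says that the subject \(x\) has the relevant properties, \(B_x(Fy)\) says that \(x\) believes the perceived object \(y\) to be F, and the conditional is required to hold as a matter of natural law rather than accidentally.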

Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both globally and locally reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute relative to certain standards (Dretske, 1981; Cohen, 1988). That is to say, in order to know a proposition, our evidence need not eliminate all the alternatives to that proposition; rather, we can know the proposition provided our evidence eliminates all the relevant alternatives, where the set of relevant alternatives (a proper subset of the set of all alternatives) is determined by some standard. Moreover, according to the relevant alternatives view, the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic's alternatives does not lead to a sceptical result. For knowledge requires only the elimination of the relevant alternatives, so the relevant alternatives view preserves both strands in our thinking about knowledge. Knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

The interesting thesis that counts as a causal theory of justification (in the sense of 'causal theory' intended here) is this: a belief is justified in case it was produced by a type of process that is globally reliable, that is, its propensity to produce true beliefs, which can be defined (to a good approximation) as the proportion of the beliefs it produces (or would produce) that are true, is sufficiently great.
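As a rough formalization (the notation is ours, not Goldman's), the global reliability of a process type \(T\) could be written

\[ R(T) \;\approx\; \frac{\text{number of true beliefs that } T \text{ produces (or would produce)}}{\text{total number of beliefs that } T \text{ produces (or would produce)}}, \]

with the proposal being that a belief is justified just in case it was produced by a type \(T\) whose \(R(T)\) is sufficiently great.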

This proposal will be adequately specified only when we are told (i) how much of the causal history of a belief counts as part of the process that produced it, (ii) which of the many types to which the process belongs is the relevant type for purposes of assessing its reliability, and (iii) relative to which world or worlds the reliability of the process type is to be assessed: the actual world, the closest worlds containing the case being considered, or something else. Let us look at the answers suggested by Goldman, the leading proponent of a reliabilist account of justification.

(1) Goldman (1979, 1986) takes the relevant belief-producing process to include only the proximate causes internal to the believer. So, for instance, in my coming to believe that the telephone is ringing, the process that produced the belief, for purposes of assessing reliability, includes just the causal chain of neural events from the stimulus in my ears inward, and the other brain alterations on which the production of the belief depended: it does not include any events in the telephone, or the sound waves travelling between it and my ears, or any earlier decisions I made that were responsible for my being within hearing distance of the telephone at that time. It does seem intuitively plausible that the process on which a belief's justification depends should be restricted to ones internal and proximate to the belief. Why? Goldman does not tell us. One answer that some philosophers might give is that a belief's being justified at a given time can depend only on facts directly accessible to the believer's awareness at that time (for, if a believer ought to hold only beliefs that are justified, she should be able to tell at any given time what beliefs would then be justified for her). However, this cannot be Goldman's answer, because he wishes to include in the relevant process neural events that are not directly accessible to consciousness.

(2) Once the reliabilist has told us how to delimit the process producing a belief, he needs to tell us which of the many types to which it belongs is the relevant type. Consider, for example, the process that produces your believing that you see a book before you. One very broad type to which that process belongs would be specified by 'coming to a belief as to something one perceives as a result of activation of the nerve endings in some sensory organ'. A narrower type to which the process belongs would be specified by 'coming to a belief as to what one sees as a result of activation of the nerve endings in one's retinas'. A still narrower type would be given by inserting in the last specification a description of a particular pattern of activation of the retina's particular cells. Which of these or other types to which the token process belongs is the relevant type for determining whether the type of process that produced your belief is reliable?

If we select a type that is too broad, we will treat as having the same degree of justification various beliefs that intuitively seem to have different degrees of justification. Thus the broadest type we specified for your belief that you see a book before you applies also to perceptual beliefs where the object seen is far away and seen only briefly, which are intuitively less justified. On the other hand, if we are allowed to select a type that is as narrow as we please, then we can make it out that an obviously unjustified but true belief is produced by a reliable type of process. For example, suppose I see a blurred shape through the fog far off in a field and unjustifiedly, but correctly, believe that it is a sheep: if we include enough details about my retinal image in specifying the type of the visual process that produced that belief, we can specify a type that is likely to have only that one instance and is therefore 100 per cent reliable. Goldman conjectures (1986) that the relevant process type is the narrowest type that is causally operative. Presumably, a feature of the process producing a belief was causally operative in producing it just in case, had some alternative feature been present instead, it would not have led to that belief. We need to say 'some' here rather than 'any', because, for example, when I see an oak or a maple, the particular shapes in my retinal image are causally operative in producing my belief that what I see is a tree, even though there are alternative shapes, for example oak-shaped or maple-shaped ones, that would have produced the same belief.

(3) Should the justification of a belief in a hypothetical, counterfactual example be gauged by the reliability of the belief-producing process in the possible world of the example? That leads to the implausible result that in a world run by a Cartesian demon, a powerful being who causes the other inhabitants of the world to have rich and coherent sets of perceptual and memory impressions that are all illusory, the perceptual and memory beliefs of those inhabitants are all unjustified, for they are produced by processes that are, in that world, quite unreliable. If we say instead that it is the reliability of the processes in the actual world that matters, we get the equally undesired result that, if the actual world is a demon world, then our perceptual and memory beliefs are all unjustified.

Goldman's solution (1986) is that the reliability of the process types is to be gauged by their performance in 'normal' worlds, that is, worlds consistent with our general beliefs about the world . . . about the sorts of objects, events and changes that occur in it. This gives the intuitively right results for the problem cases just considered, but it yields an implausible relativity in judgements of justification. If there are people whose general beliefs about the world are very different from mine, then there may, on this account, be beliefs that I can correctly regard as justified (ones produced by processes that are reliable in what I take to be a normal world) but that they can correctly regard as not justified.

However these questions about the specifics are dealt with, there are reasons for questioning the basic idea that the criterion for a belief's being justified is its being produced by a reliable process. Doubt about the sufficiency of the reliabilist criterion is prompted by a sort of example that Goldman himself uses for another purpose. Suppose that being in brain-state B always causes one to believe that one is in brain-state B. Here the reliability of the belief-producing process is perfect, but we can readily imagine circumstances in which a person goes into brain-state B and therefore has the belief in question, though this belief is by no means justified (Goldman, 1979). Doubt about the necessity of the condition arises from the possibility that one might know that one has strong justification for a certain belief and yet that knowledge is not what actually prompts one to believe. For example, I might be well aware that, having read the weather bureau's forecast that it will be much hotter tomorrow, I have ample reason to be confident that it will be hotter tomorrow; yet I irrationally refuse to believe it until Wally tells me that he feels in his joints that it will be hotter tomorrow. Here what prompts me to believe does not justify my belief, but my belief is nevertheless justified by my knowledge of the weather bureau's prediction and of its evidential force. Indeed, given my justification, and given that there is nothing untoward about the weather bureau's prediction, my belief, if true, can be counted as knowledge. This sort of example raises doubt whether any causal condition, be it a reliable process or something else, is necessary for either justification or knowledge.

Philosophers and scientists alike have often held that the simplicity or parsimony of a theory is one reason, all else being equal, to view it as true. This goes beyond the unproblematic idea that simpler theories are easier to work with and have greater aesthetic appeal.

One theory is more parsimonious than another when it postulates fewer entities, processes, changes, or explanatory principles; the simplicity of a theory depends on essentially the same considerations, though parsimony and simplicity are not obviously the same thing. It is plausible to demand clarification of what makes one theory simpler or more parsimonious than another before the justification of these methodological maxims can be addressed.

If we set this descriptive problem to one side, the major normative problem is as follows: what reason is there to think that simplicity is a sign of truth? Why should we accept a simpler theory instead of its more complex rivals? Newton and Leibniz thought that the answer was to be found in a substantive fact about nature. In the Principia, Newton laid down as his first Rule of Reasoning in Philosophy that nature does nothing in vain . . . for Nature is pleased with simplicity and affects not the pomp of superfluous causes. Leibniz hypothesized that the actual world obeys simple laws because God's taste for simplicity influenced his decision about which world to actualize.

The tragedy of the Western mind, described by Koyré, is a direct consequence of the stark Cartesian division between mind and world. We discovered the certain principles of physical reality, said Descartes, not by the prejudices of the senses, but by the light of reason, which thus possess so great evidence that we cannot doubt of their truth. Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.

The most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world, or between the realm of matter and the realm of pure mind or spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.

Nicolas Copernicus, Galileo, Johannes Kepler, and Isaac Newton were all inheritors of a cultural tradition in which ontological dualism was a primary article of faith. Hence the idealization of the mathematical ideal as a source of communion with God, which dates from Pythagoras, provided a metaphysical foundation for the emerging natural sciences. This explains why the creators of classical physics believed that doing physics was a form of communion with the geometrical and mathematical forms resident in the perfect mind of God. This view would survive in a modified form in what is now known as Einsteinian epistemology, and accounts in no small part for the reluctance of many physicists to accept the epistemology associated with the Copenhagen Interpretation.

At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by observing its epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, entirely unnecessary.

Laplace is recognized for eliminating not only the theological component of classical physics but the entire metaphysical component as well. The epistemology of science requires, he said, that we proceed by inductive generalizations from observed facts to hypotheses that are tested by observed conformity of the phenomena. What was unique about Laplace's view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace's view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities.

As this view of hypotheses and of the truths of nature as quantities was extended in the nineteenth century to a mathematical description of phenomena like heat, light, electricity, and magnetism, Laplace's assumptions about the actual character of scientific truth seemed correct. This progress suggested that, if we could remove all thoughts about the nature or the source of phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Comte, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.

The seventeenth-century view of physics as a philosophy of nature, or as natural philosophy, was displaced by the view of physics as an autonomous science that was the science of nature. This view, which was premised on the doctrine of positivism, promised to subsume all of nature with a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. In the history of science, the irony is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.

Epistemology since Hume and Kant has drawn back from this theological underpinning. Indeed, the very idea that nature is simple (or uniform) has come in for a critique. The view has taken hold that a preference for simple and parsimonious hypotheses is purely methodological: It is constitutive of the attitude we call scientific and makes no substantive assumption about the way the world is.

A variety of otherwise diverse twentieth-century philosophers of science have attempted, in different ways, to flesh out this position. Two examples must suffice here (see Hesse (1969) for summaries of other proposals). Popper (1959) holds that scientists should prefer highly falsifiable (improbable) theories: he tries to show that simpler theories are more falsifiable. Quine (1966), in contrast, sees a virtue in theories that are highly probable; he argues for a general connexion between simplicity and high probability.

Both these proposals are global. They attempt to explain why simplicity should be part of the scientific method in a way that spans all scientific subject matters. No assumption about the details of any particular scientific problem serves as a premiss in Popper or Quine's arguments.

Newton and Leibniz thought that the justification of parsimony and simplicity flows from the hand of God; Popper and Quine try to justify these methodological maxims without assuming anything substantive about the way the world is. In spite of these differences in approach, they have something in common: they assume that all uses of parsimony and simplicity in the separate sciences can be encompassed in a single justifying argument. Recent developments in confirmation theory suggest that this assumption should be scrutinized. Good (1983) and Rosenkrantz (1977) have emphasized the role of auxiliary assumptions in mediating the connexion between hypotheses and observations. Whether a hypothesis is well supported by some observations, or whether one hypothesis is better supported than another by those observations, crucially depends on empirical background assumptions about the inference problem. The same point applies to the idea of prior probability (or prior plausibility): if one hypothesis is chosen over another even though they are equally supported by current observations, this must be due to an empirical background assumption.

Principles of parsimony and simplicity mediate the epistemic connexion between hypotheses and observations. Perhaps these principles are able to do this because they are surrogates for an empirical background theory. It is not that there is one background theory presupposed by every appeal to parsimony; this has the quantifier order backwards. Rather, the suggestion is that each parsimony argument is justified only to the degree that it reflects an empirical background theory about the subject matter. Once this theory is brought out into the open, the principle of parsimony becomes entirely dispensable (Sober, 1988).

This local approach to the principles of parsimony and simplicity resurrects the idea that they make sense only if the world is one way rather than another. It rejects the idea that these maxims are purely methodological. How defensible this point of view is will depend on detailed case studies of scientific hypothesis evaluation and on further developments in the theory of scientific inference.

An inference is usually characterized as a (perhaps very complex) act of thought by virtue of which (1) one passes from a set of one or more propositions or statements to a further proposition or statement, and (2) it appears that the latter is true if the former is or are. This psychological characterization occurs throughout the literature with only inessential variations. It is natural to desire a better characterization of inference, yet attempts to do so by constructing a fuller psychological explanation fail to comprehend the grounds on which inference is objectively valid, a point made forcefully by Gottlob Frege. Attempts to understand the nature of inference through the device of representing inferences by formal-logical calculations or derivations (1) leave us puzzled about the relation of formal-logical derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations themselves inferences? Are not informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein.

Coming up with an adequate characterization of inference, and even working out what would count as an adequate characterization, is by no means a resolved philosophical problem.

Traditionally, a categorical proposition is one that is not conditional, as with simple affirmative and negative propositions; but modern opinion is wary of the distinction, since what appears categorical may vary with the choice of a primitive vocabulary and notation. Apparently categorical propositions may also turn out to be disguised conditionals: 'X is intelligent' (categorical?) may be equivalent to 'if X is given a range of tasks, she does them better than many people' (conditional?). The problem is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.

The distinction between necessary and sufficient conditions is as follows: if 'p' is a necessary condition of 'q', then 'q' cannot be true unless 'p' is true; if 'p' is a sufficient condition of 'q', then the truth of 'p' guarantees the truth of 'q'. Thus steering well is a necessary condition of driving in a satisfactory manner, but it is not sufficient, for one can steer well but drive badly for other reasons. Confusion may result if the distinction is not heeded. For example, the statement that 'A' causes 'B' may be interpreted to mean that 'A' is itself a sufficient condition for 'B', or that it is only a necessary condition for 'B', or perhaps a necessary part of a total sufficient condition. Lists of conditions to be met for satisfying some administrative or legal requirement frequently attempt to give individually necessary and jointly sufficient sets of conditions.
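In the usual truth-functional shorthand (a standard gloss rather than anything in the passage above): 'p is a necessary condition of q' amounts to \( q \rightarrow p \), while 'p is a sufficient condition of q' amounts to \( p \rightarrow q \). In the driving example, writing D for 'drives satisfactorily' and S for 'steers well', \( D \rightarrow S \) holds but \( S \rightarrow D \) does not.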

A conditional is any proposition of the form 'if p then q'. The condition hypothesized, 'p', is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, which asserts merely that either 'not-p' or 'q' is the case. Stronger conditionals include elements of modality, corresponding to the thought that if 'p' is true then 'q' must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether conditionals are better treated semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.

It follows from the definition of strict implication that a necessary proposition is strictly implied by any proposition, and that an impossible proposition strictly implies any proposition. If strict implication corresponds to q follows from p, then this means that a necessary proposition follows from anything at all, and anything at all follows from an impossible proposition. This is a problem if we wish to distinguish between valid and invalid arguments with necessary conclusions or impossible premises.
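Writing strict implication with the necessity operator (a standard formalization, not drawn from the text above): 'p strictly implies q' is \( \Box(p \rightarrow q) \), i.e., \( \neg\Diamond(p \wedge \neg q) \). If q is necessary, \( \Box q \), then \( \Box(p \rightarrow q) \) holds for any p whatsoever; and if p is impossible, \( \neg\Diamond p \), then \( \Box(p \rightarrow q) \) holds for any q whatsoever. These are the two results just noted, and they make trouble for reading strict implication as 'q follows from p'.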

The Humean problem of induction supposes that there is some property 'A' characterizing an observational or experimental situation, and that out of a large number of observed instances of 'A', some fraction m/n (possibly equal to 1) have also been instances of some logically independent property 'B'. Suppose further that the background circumstances not specified in these descriptions have been varied to a substantial degree, and that there is no collateral information available concerning the frequency of B's among A's or concerning causal or nomological connexions between instances of 'A' and instances of 'B'.

In this situation, an enumerative or instantial inductive inference would move from the premise that m/n of observed A's are B's to the conclusion that approximately m/n of all A's are B's. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of A's should be taken to include not only unobserved A's and future A's, but also possible or hypothetical A's. (An alternative conclusion would concern the probability or likelihood of the next observed 'A' being a 'B'.)
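Set out schematically (a conventional rendering of the inference just described, not an addition to it):

Premise: m/n of observed A's are B's.
Conclusion: approximately m/n of all A's are B's (or: the probability that the next observed A is a B is approximately m/n).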

The traditional or Humean problem of induction, often referred to simply as the problem of induction, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premisses are true, or even that their chances of truth are significantly enhanced?

Hume's discussion of this issue deals explicitly only with cases where all observed A's are B's, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: inductive inferences are not rationally justified, but are instead the result of an essentially a-rational process, custom or habit. Hume (1711-76) challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma (sometimes referred to as Hume's fork).

Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas, or experimental, i.e., empirical, reasoning concerning matters of fact or existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is no contradiction to suppose that the course of nature may change, that an order observed in the past will not continue into the future. But it cannot be the latter either, since any empirical argument would appeal to the success of such reasoning in previous experience, and the justifiability of generalizing from experience is precisely what is at issue, so that any such appeal would be question-begging. Hence, Hume concludes, there can be no such reasoning (1748).

An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premiss, in which case the issue is obviously how such a premiss can be justified. Hume's argument is then that no such justification is possible: the principle cannot be justified a priori, because its denial is not contradictory, nor can it be justified by appeal to its having held true in past experience, without obviously begging the question.

The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume's argument, namely that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find another sort of justification for induction. Such responses fall into two main categories: (1) pragmatic justifications or vindications of induction, mainly developed by Hans Reichenbach (1891-1953), and (2) ordinary language justifications of induction, whose most important proponent is Peter Frederick Strawson (1919- ). In contrast, some philosophers still attempt to reject Hume's dilemma by arguing either (3) that, contrary to appearances, induction can be inductively justified without vicious circularity, or (4) that an a priori justification of induction is possible after all. These will be considered in turn.

(1) Reichenbach's view is that induction is best regarded, not as a form of inference, but rather as a method for arriving at posits regarding, for example, the proportion of A's that are B's. Such a posit is not a claim asserted to be true, but is instead an intellectual wager analogous to a bet made by a gambler. Understood in this way, the inductive method says that one should posit that the observed proportion is, within some measure of approximation, the true proportion, and then continually correct that initial posit as new information comes in.

The gambler's bet is normally an appraised posit, i.e., he knows the chances or odds that the outcome on which he bets will actually occur. In contrast, the inductive bet is a blind posit: we do not know the chances that it will succeed or even that success is possible. What we are gambling on when we make such a bet is the value of a certain proportion in the independent world, which Reichenbach construes as the limit of the observed proportion as the number of cases increases to infinity. Nevertheless, we have no way of knowing that there is even such a limit, and no way of knowing whether the proportion of A's that are B's converges in the end on some stable value rather than varying at random. If we cannot know that this limit exists, then we obviously cannot know that we have any definite chance of finding it.

What we can know, according to Reichenbach, is that if there is a truth of this sort to be found, the inductive method will eventually find it. That this is so is an analytic consequence of Reichenbach's account of what it is for such a limit to exist. The only way that the inductive method of making an initial posit and then refining it in light of new observations can fail eventually to arrive at the true proportion is if the series of observed proportions never converges on any stable value, which means that there is no truth to be found pertaining to the proportion of A's that are B's. Thus, induction is justified, not by showing that it will succeed, or indeed that it has any definite likelihood of success, but only by showing that it will succeed if success is possible. Reichenbach's claim is that no more than this can be established for any method, and hence that induction gives us our best chance for success, our best gamble in a situation where there is no alternative to gambling.
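A minimal simulation may make the "posit and correct" idea concrete. The sketch below is illustrative only; the data stream and names are hypothetical, not Reichenbach's. It applies the straight-rule posit - take the observed relative frequency of B's among A's so far and revise it as each new A is observed - and shows the posits settling toward the limiting frequency when such a limit exists.

    import random

    def straight_rule_posits(observations):
        """Yield the running posit: the observed fraction of A's that are B's so far."""
        b_count = 0
        for n, is_b in enumerate(observations, start=1):
            b_count += is_b
            yield b_count / n  # Reichenbach-style posit after n observed A's

    # Hypothetical data stream: each observed A is a B with limiting frequency 0.7.
    random.seed(0)
    stream = [random.random() < 0.7 for _ in range(2000)]

    posits = list(straight_rule_posits(stream))
    for n in (10, 100, 2000):
        print(f"posit after {n:>4} cases: {posits[n - 1]:.3f}")
    # The posits approach 0.7. In a "chaotic" stream with no limiting frequency,
    # they would never settle, and on Reichenbach's view no method could do better.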

This pragmatic response to the problem of induction faces several serious problems. First, there are indefinitely many other methods for arriving at posits for which the same sort of defence can be given - methods that yield the same results as the inductive method in the long run but differ arbitrarily in the short run. Despite the efforts of others, it is unclear that there is any satisfactory way to exclude such alternatives, in order to avoid the result that any arbitrarily chosen short-term posit is just as reasonable as the inductive posit. Second, even if there is a truth of the requisite sort to be found, the inductive method is only guaranteed to find it, or even to come within any specifiable distance of it, in the indefinite long run. Yet any actual application of inductive results takes place in the short run, making the relevance of the pragmatic justification to actual practice uncertain. Third, and most important, it needs to be emphasized that Reichenbach's response to the problem simply accepts the claim of the Humean sceptic that an inductive premise never provides the slightest reason for thinking that the corresponding inductive conclusion is true. Reichenbach himself is quite candid on this point, but this does not alleviate the intuitive implausibility of saying that we have no more reason for thinking that our scientific and commonsense inductive conclusions are true than, to use Reichenbach's own analogy (1949), a blind man wandering in the mountains who feels an apparent trail with his stick has for thinking that following it will lead him to safety.

An approach to induction resembling Reichenbach's, in that particular inductive conclusions are treated as posits or conjectures rather than as the conclusions of cogent inferences, is offered by Popper. However, Popper's view is even more overtly sceptical: it amounts to saying that all that can ever be said in favour of the truth of an inductive claim is that the claim has been tested and not yet been shown to be false.

(2) The ordinary language response to the problem of induction has been advocated by many philosophers, but Strawson's version is the most influential. Strawson claims that the question whether induction is justified or reasonable makes sense only if it tacitly involves the demand that inductive reasoning meet the standards appropriate to deductive reasoning, i.e., that inductive conclusions be shown to follow deductively from the inductive premises. Such a demand cannot, of course, be met, but only because it is illegitimate: inductive and deductive reasoning are simply fundamentally different kinds of reasoning, each possessing its own autonomous standards, and there is no reason to demand or expect that one of these kinds meet the standards of the other. If induction is assessed by inductive standards, the only ones that are appropriate, then it is obviously justified.

The problem here is to understand what this allegedly obvious justification of induction amounts to. In his main discussion of the point (1952), Strawson claims that it is an analytic truth that believing a conclusion for which there is strong evidence is reasonable, and an analytic truth that inductive evidence of the sort captured by the schema presented earlier constitutes strong evidence for the corresponding inductive conclusion, thus apparently yielding the analytic conclusion that believing a conclusion for which there is inductive evidence is reasonable. Nevertheless, he also admits, indeed insists, that the claim that inductive conclusions will be true in the future is contingent, empirical, and may turn out to be false (1952). Thus, the notion of reasonable belief and the correlative notion of strong evidence must apparently be understood in ways that have nothing to do with likelihood of truth, presumably by appeal to the standards of reasonableness and strength of evidence that are accepted by the community and embodied in ordinary usage.

Understood in this way, Strawson's response to the problem of induction does not speak to the central issue raised by Humean scepticism: the issue of whether the conclusions of inductive arguments are likely to be true. It amounts to saying merely that if we reason in this way, we can correctly call ourselves reasonable and our evidence strong, according to our accepted community standards. But on the underlying issue of whether following these standards is a good way to find the truth, the ordinary language response appears to have nothing to say.

(3) The main attempts to show that induction can be justified inductively have concentrated on showing that such a defence can avoid circularity. Skyrms (1975) formulates perhaps the clearest version of this general strategy. The basic idea is to distinguish different levels of inductive argument: a first level in which induction is applied to things other than arguments; a second level in which it is applied to arguments at the first level, arguing that they have been observed to succeed so far and hence are likely to succeed in general; a third level in which it is applied in the same way to arguments at the second level; and so on. Circularity is allegedly avoided by treating each of these levels as autonomous and justifying the argument at each level by appeal to an argument at the next level.

One problem with this sort of move is that even if circularity is avoided, the ascent to higher and higher levels will clearly eventually fail simply for lack of evidence: a level will be reached at which there have not been enough successful inductive arguments to provide a basis for an inductive justification at the next higher level, and if this is so, then the whole series of justifications collapses. A more fundamental difficulty is that the epistemological significance of the distinction between levels is obscure. If the issue is whether reasoning in accord with the original schema offered above ever provides a good reason for thinking that the conclusion is likely to be true, then it still seems question-begging, even if not flatly circular, to answer this question by appeal to another argument of the same form.

(4) The idea that induction can be justified on a purely a priori basis is in one way the most natural response of all: it alone treats an inductive argument as an independently cogent piece of reasoning whose conclusion can be seen rationally to follow, although perhaps only with probability, from its premise. Such an approach has, however, only rarely been advocated (Russell, 1912, and BonJour, 1986), and is widely thought to be clearly and demonstrably hopeless.

Many of the reasons for this pessimistic view depend on general epistemological theses about the possible scope or nature of a priori cognition. Thus, if, as Quine alleges, there is no a priori justification of any kind, then obviously an a priori justification for induction is ruled out. Or if, as more moderate empiricists claim, a priori knowledge must be analytic, then again an a priori justification for induction seems to be precluded, since the claim that if an inductive premise is true then the conclusion is likely to be true does not fit the standard conceptions of analyticity. A consideration of these matters is beyond the scope of the present discussion.

There are, however, two more specific and quite influential reasons for thinking that an a priori approach is impossible that can be briefly considered. First, there is the assumption, originating with Hume but since adopted by many others, that an a priori defence of induction would have to involve turning induction into deduction, i.e., showing, per impossibile, that the inductive conclusion follows deductively from the premise, so that it is a formal contradiction to accept the latter and deny the former. However, it is unclear why an a priori approach need be committed to anything this strong. It would be enough if it could be argued that it is a priori unlikely that such a premise be true and the corresponding conclusion false.

Second, Reichenbach defends his view that pragmatic justification is the best that is possible by pointing out that a completely chaotic world, in which there is simply no true conclusion to be found as to the proportion of A's that are B's, is neither impossible nor unlikely from a purely a priori standpoint, the suggestion being that there can therefore be no a priori reason for thinking that such a conclusion is true. Nevertheless, there is a gap in this reasoning: that a chaotic world is a priori neither impossible nor unlikely in the absence of any evidence does not show that such a world is not a priori unlikely, and a world containing such-and-such regularity a priori somewhat likely, relative to the occurrence of a long-run pattern of evidence in which a certain stable proportion of observed A's are B's - an occurrence, it might be claimed, that would be highly unlikely in a chaotic world (BonJour, 1986).

Goodman's new riddle of induction begins by supposing that before some specific time t (perhaps the year 2000) we observe a large number of emeralds (property A) and find them all to be green (property B). We proceed to reason inductively and conclude that all emeralds are green. Goodman points out, however, that we could have drawn a quite different conclusion from the same evidence. If we define the term 'grue' to mean green if examined before time t and blue if examined after time t, then all of our observed emeralds will also be grue. A parallel inductive argument will yield the conclusion that all emeralds are grue, and hence that all those examined after the year 2000 will be blue. Presumably the first of these conclusions is genuinely supported by our observations and the second is not. Nevertheless, the problem is to say why this is so, and to impose some further restriction upon inductive reasoning that will permit the first argument and exclude the second.
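A small sketch may make the formal symmetry vivid. It is illustrative only; the cutoff year and the toy data are invented, and the predicate definitions simply encode the definitions given above. Both "all emeralds are green" and "all emeralds are grue" fit every observation made before t, yet they diverge about emeralds examined afterwards.

    from dataclasses import dataclass

    T = 2000  # the cutoff year t in Goodman's example

    @dataclass
    class Emerald:
        colour: str         # 'green' or 'blue'
        examined_year: int

    def is_green(e: Emerald) -> bool:
        return e.colour == "green"

    def is_grue(e: Emerald) -> bool:
        # grue: green if examined before t, blue if examined after t
        return (e.examined_year < T and e.colour == "green") or \
               (e.examined_year >= T and e.colour == "blue")

    # Evidence class: emeralds examined before t, all found green.
    evidence = [Emerald("green", year) for year in (1960, 1975, 1990, 1999)]

    print(all(is_green(e) for e in evidence))  # True: fits "all emeralds are green"
    print(all(is_grue(e) for e in evidence))   # True: equally fits "all emeralds are grue"

    # The two hypotheses diverge only for emeralds examined after t:
    future = Emerald("green", 2024)
    print(is_green(future), is_grue(future))   # True False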

The obvious suggestion is that 'grue' and similar predicates do not correspond to genuine, purely qualitative properties in the way that 'green' and 'blue' do, and that this is why inductive arguments involving them are unacceptable. Goodman, however, claims to be unable to make clear sense of this suggestion, pointing out that the relations of formal definability are perfectly symmetrical: 'grue' may be defined in terms of 'green' and 'blue', but 'green' can equally well be defined in terms of 'grue' and 'bleen' (where something is bleen if it is blue if examined before t and green if examined after t).

The grue paradox also demonstrates the importance of categorization. Something is grue if it is examined before the future time t and green, or not so examined and blue. Even though all emeralds in our evidence class are grue, we ought not infer that all emeralds are grue, for 'grue' is unprojectible and cannot transmit credibility from known to unknown cases. Only projectible predicates are right for induction. Goodman considers entrenchment the key to projectibility: having a long history of successful projection, 'green' is entrenched; lacking such a history, 'grue' is not. A hypothesis is projectible, Goodman suggests, only if its predicates (or suitably related ones) are much better entrenched than its rivals'. Past successes do not assure future ones, so induction remains a risky business. The rationale for favouring entrenched predicates is pragmatic: of the possible projections from our evidence class, the one that fits with past practice enables us to utilize our cognitive resources best. Its prospects of being true are no worse than its competitors' and its cognitive utility is greater.

So, for a better understanding of induction, we should note that the term is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative arguments, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling us that Fa, Fb, Fc . . ., where a, b, c, and so forth are all of some kind G, it is inferred that G's from outside the sample, such as future G's, will be F, or perhaps that all G's are F. In this way, from the fact that this person and that person have deceived them, children may infer that everyone is a deceiver. Different but similar inferences run from a property possessed by an object at some time to the same object's future possession of the same property, or from the constancy of some law-like pattern in events and states of affairs to its future constancy. All objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.

The rational basis of any such inference was challenged by Hume, who believed that induction presupposed belief in the uniformity of nature, but that this belief has no defence in reason and merely reflects a habit or custom of the mind. Hume was not therefore sceptical about the practice of induction itself, but about the role of reason in either explaining or justifying it. Trying to answer Hume and to show that there is something rationally compelling about the inference is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties, for which the inference is plausible (often called projectible properties), from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both common sense and science pay attention to such factors as variations within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on.

Nevertheless, the fundamental problem remains that experience shows us only events occurring within a very restricted part of a vast spatial and temporal order, about which we then come to believe things.

Closely related is confirmation theory, which concerns the measure to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is Gottfried Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the 20th century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific. The principal developments were due to Rudolf Carnap (1891-1970), culminating in his Logical Foundations of Probability (1950). Carnap's idea was that the measure needed is the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared to the number in which the evidence itself holds: the probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared to the total range of possibilities left open by the evidence. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement: it demands that we can put a measure on the range of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone.
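Carnap's proposal can be stated compactly as follows (a sketch in standard notation; the toy numbers at the end are invented purely for exposition):

    % degree of confirmation of hypothesis h by evidence e
    c(h, e) \;=\; \frac{m(h \land e)}{m(e)}
    % where m is a measure defined over the logically possible state-descriptions.
    % Toy illustration: if e holds in 8 equally weighted state-descriptions and
    % h together with e holds in 6 of them, then c(h, e) = 6/8 = 0.75.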

Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what appears plausible as scientific knowledge at a given time.

A paradox arises when a set of apparently incontrovertible premises leads to unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Somewhat loosely, a paradox is a compelling argument from unexceptionable premises to an unacceptable conclusion; more strictly speaking, a paradox is specified to be a sentence that is true if and only if it is false. A characteristic example of the latter would be: 'The displayed sentence is false.'

It is easy to see that this sentence is false if true, and true if false. A paradox, in either of the senses distinguished, presents an important philosophical challenge. Epistemologists are especially concerned with various paradoxes having to do with knowledge and belief. For example, the Knower paradox is an argument that begins with apparently impeccable premisses about the concepts of knowledge and inference and derives an explicit contradiction. The origin of the reasoning is the surprise examination paradox: a teacher announces that there will be a surprise examination next week. A clever student argues that this is impossible. The test cannot be on Friday, the last day of the week, because then it would not be a surprise: we would know the day of the test by Thursday evening. This means we can also rule out Thursday, for after we learn that no test has been given by Wednesday, we would know the test is on Thursday or Friday, and we would already know that it is not on Friday by the previous reasoning. The remaining days can be eliminated in the same manner.
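The student's backward elimination can be set out mechanically, as in the following sketch (purely illustrative; it encodes only the student's reasoning, which the discussion below treats as unsound):

    DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

    def eliminated_days(days):
        """Apply the student's reasoning: repeatedly strike the latest day still
        possible, on the grounds that an exam held then would not be a surprise."""
        possible = list(days)
        struck = []
        while possible:
            last = possible.pop()   # the latest remaining day cannot be a surprise
            struck.append(last)
        return struck

    print(eliminated_days(DAYS))
    # ['Friday', 'Thursday', 'Wednesday', 'Tuesday', 'Monday'] - every day is struck,
    # so the student concludes a surprise exam is impossible; yet an exam given on,
    # say, Wednesday would in fact surprise him, which is the paradox.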

This puzzle has over a dozen variants. The first was probably invented by the Swedish mathematician Lennart Ekbom in 1943. Although the first few commentators regarded the reverse elimination argument as cogent, every writer on the subject since 1950 agrees that the argument is unsound. The controversy has been over the proper diagnosis of the flaw.

Initial analyses of the student's argument tried to lay the blame on a simple equivocation. Their failure led to more sophisticated diagnoses. The general format has been an assimilation to better-known paradoxes. One tradition casts the surprise examination paradox as a self-referential problem, fundamentally akin to the Liar, the paradox of the Knower, or Gödel's incompleteness theorem. Along these lines, Kaplan and Montague (1960) distilled the following self-referential paradox, the Knower. Consider the sentence: (S) The negation of this sentence is known (to be true).

Suppose that (S) is true. Then its negation is known and hence true. However, if its negation is true, then (S) must be false. Therefore (S) is false, or, what comes to the same thing, the negation of (S) is true. But this very argument establishes the negation of (S); since the argument yields knowledge of its conclusion, the negation of (S) is known, which is precisely what (S) asserts. So (S) is true after all, and we have a contradiction.

This paradox and its accompanying reasoning are strongly reminiscent of the Liar paradox, which (in one version) begins by considering the sentence 'This sentence is false' and derives a contradiction. Versions of both arguments using axiomatic formulations of arithmetic and Gödel numbers to achieve the effect of self-reference yield important meta-theorems about what can be expressed in such systems. Roughly, these are to the effect that no predicate definable in formalized arithmetic can have the properties we demand of truth (Tarski's theorem) or of knowledge (Montague, 1963).

These meta-theorems still leave us with a problem: if we add to such formalized languages predicates intended to express the concepts of knowledge (or truth) and inference, the question remains whether a consistent logic of these concepts can be given, for the sentences expressing the leading principles of the Knower paradox appear to be true.

Explicitly, the assumptions about knowledge and inference are:

(1) If the sentence A is known, then A is true.

(2) (1) is known.

(3) If B is correctly inferred from A, and A is known, then B is known.

To give an absolutely explicit derivation of the paradox by applying these principles to (S), we must add (contingent) assumptions to the effect that certain inferences have been performed. Still, as we go through the argument of the Knower, these inferences are in fact performed. Even if we can somehow restrict such principles and construct a consistent formal logic of knowledge and inference, the paradoxical argument as expressed in natural language still demands some explanation.
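For definiteness, here is one way the derivation can be reconstructed (a sketch in obvious notation, writing K(X) for 'X is known'; this is a reconstruction, not a quotation of Kaplan and Montague):

    \text{(S) says } K(\neg S).
    1.\ \text{Suppose } S. \text{ Then } K(\neg S), \text{ so by (1), } \neg S. \text{ Contradiction.}
    2.\ \text{Hence } \neg S \text{ has been correctly inferred from principle (1) alone.}
    3.\ \text{By (2), principle (1) is known; so by (3), the conclusion inferred from it is known: } K(\neg S).
    4.\ \text{But } K(\neg S) \text{ is exactly what } S \text{ asserts, so } S \text{ is true, contradicting step 2.}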

The usual proposals for dealing with the Liar often have their analogues for the Knower, e.g., that there is something wrong with self-reference, or that knowledge (or truth) is properly a predicate of propositions and not of sentences. The arguments showing that some of these proposals are not adequate are often parallel to those for the Liar paradox. In addition, one can try here what seems to be an adequate solution for the surprise examination paradox, namely the observation that new knowledge can drive out old knowledge, but this does not seem to work on the Knower (Anderson, 1983).

There are a number of paradoxes of the Liar family. The simplest example is the sentence 'This sentence is false', which must be false if it is true, and true if it is false. One suggestion is that the sentence fails to say anything; but sentences that fail to say anything are at least not true. In that case, consider the sentence 'This sentence is not true', which, if it fails to say anything, is not true, and hence, it seems, is true after all (this kind of reasoning is sometimes called the strengthened Liar). Other versions of the Liar introduce pairs of sentences, as in a slogan on the front of a T-shirt saying 'The sentence on the back of this T-shirt is false', and one on the back saying 'The sentence on the front of this T-shirt is true'. It is clear that each sentence individually is well formed, and, were it not for the other, might have said something true. So any attempt to dismiss the paradox by ruling that the sentences involved are meaningless will face problems.

Even so, two approaches that have some hope of adequately dealing with this paradox are hierarchy solutions and truth-value gap solutions. According to the first, knowledge is structured into levels. It is argued that there is not one unitary notion expressed by the verb 'knows', but rather a whole series of notions - knows-0, knows-1, and so on (perhaps into the transfinite). Stated in terms of predicates expressing such ramified concepts and properly restricted, (1)-(3) lead to no contradiction. The main objections to this procedure are that the meaning of these levels has not been adequately explained, and that the idea of such subscripts, even implicit ones, in a natural language is highly counterintuitive. The truth-value gap solution takes sentences such as (S) to lack truth-value: they are neither true nor false, because they do not express propositions. This defeats a crucial step in the reasoning used in the derivation of the paradoxes. Kripke (1975) has developed this approach in connexion with the Liar, and Asher and Kamp (1986) have worked out some details of a parallel solution to the Knower. The principal objection is that strengthened or super versions of the paradoxes tend to reappear when the solution itself is stated.

Since the paradoxical deduction uses only the properties (1)-(3), and since the argument is formally valid, any notion that satisfies these conditions will lead to a paradox. Thus, Grim (1988) notes that the predicate may be read as 'is known by an omniscient God' and concludes that there is no coherent single notion of omniscience. Thomason (1980) observes that with some different conditions, analogous reasoning about belief can lead to paradoxical consequences.

Overall, it looks as if we should conclude that knowledge and truth are ultimately intrinsically stratified concepts. It would seem that we must simply accept the fact that these (and similar) concepts cannot be assigned any one fixed level, finite or infinite. Still, the meaning of this idea certainly needs further clarification.

As noted, a paradox arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions, and solving it will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Famous families of paradoxes include the semantic paradoxes and Zeno's paradoxes. At the beginning of the 20th century, Russell's paradox and other set-theoretical paradoxes led to the complete overhaul of the foundations of set theory, while the Sorites paradox has led to investigations of the semantics of vagueness and of fuzzy logics.

To what extent, however, can analysis be informative? This is the question that gives rise to what philosophers have traditionally called the paradox of analysis. Thus, consider the following proposition:

(1) To be an instance of knowledge is to be an instance of justified true belief not essentially grounded in any falsehood. (1), if true, illustrates an important type of philosophical analysis. For convenience of exposition, I will assume that (1) is a correct analysis. The paradox arises from the fact that if the concept of justified true belief not essentially grounded in any falsehood is the analysans of the concept of knowledge, it would seem that they are the same concept, and hence that: (2) To be an instance of knowledge is to be an instance of knowledge would have to be the same proposition as (1). But then how can (1) be informative when (2) is not? This is what is called the first paradox of analysis. Classical writings on analysis suggest a second paradox of analysis (Moore, 1942).

(3) An analysis of the concept of being a brother is that to be a brother is to be a male sibling. If (3) is true, it would seem that the concept of being a brother would have to be the same concept as the concept of being a male sibling, and that:

(4) An analysis of the concept of being a brother is that to be a brother is to be a brother would also have to be true and, in fact, would have to be the same proposition as (3). Yet (3) is true and (4) is false.

Both these paradoxes rest upon the assumptions that analysis is a relation between concepts, rather than one involving entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are the same concept. Both these assumptions are explicit in Moore; however, some of Moore's commentary hints at a solution: that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expressions used to express it. He says he thinks a solution of this sort is bound to be right, but fails to suggest one because he cannot see a way in which the analysis can be even partly about the expression (Moore, 1942).

One way to implement this idea, as a solution to the second paradox, is to explicate (3) as: (5) An analysis is given by saying that the verbal expression 'x is a brother' expresses the same concept as is expressed by the conjunction of the verbal expressions 'x is male' when used to express the concept of being male and 'x is a sibling' when used to express the concept of being a sibling (Ackerman, 1990). An important point about (5) is as follows. Stripped of its philosophical jargon ('analysis', 'concept', 'x is a . . .'), (5) seems to state the sort of information generally stated in a definition of the verbal expression 'brother' in terms of the verbal expressions 'male' and 'sibling', where this definition is designed to draw upon the listener's antecedent understanding of the verbal expressions 'male' and 'sibling', and thus to tell the listener what the verbal expression 'brother' really means, instead of merely providing the information that two verbal expressions are synonymous without specifying the meaning of either one. Thus, this solution to the second paradox seems to make the sort of analysis that gives rise to the paradox a matter of specifying the meaning of a verbal expression in terms of separate verbal expressions already understood, and saying how the meanings of these separate, already-understood verbal expressions are combined. This corresponds to Moore's intuitive requirement that an analysis should both specify the constituent concepts of the analysandum and tell how they are combined; but is this all there is to philosophical analysis?

We must note that, in addition to there being two paradoxes of analysis, there are two types of analysis that are relevant here. (There are also other types of analysis, such as reformatory analysis, where the analysans is intended to improve on and replace the analysandum. But since reformatory analysis involves no commitment to conceptual identity between analysans and analysandum, it does not generate a paradox of analysis and so will not concern us here.) One way to recognize the difference between the two types of analysis concerning us here is to focus on the difference between the two paradoxes. This can be done by means of the Frege-inspired sense-individuation condition, which is the condition that two expressions have the same sense if and only if they can be interchanged salva veritate whenever used in propositional attitude contexts. If the expressions for the analysans and the analysandum in (1) met this condition, (1) and (2) would not raise the first paradox; but the second paradox arises regardless of whether the expressions for the analysans and the analysandum meet this condition. The second paradox is a matter of the failure of such expressions to be interchangeable salva veritate in sentences involving such contexts as 'an analysis is given thereof'. Thus, a solution (such as the one offered above) that is aimed only at such contexts can solve the second paradox. This is clearly not so for the first paradox, however, which applies to all pairs of propositions expressed by sentences in which expressions for pairs of analysantia and analysanda raising the first paradox are interchanged. One approach to the first paradox is to argue that, despite the apparent epistemic inequivalence of (1) and (2), the concept of justified true belief not essentially grounded in any falsehood is still identical with the concept of knowledge (Sosa, 1983). Another approach is to argue that in the sort of analysis raising the first paradox, the analysans and analysandum are not identical concepts, but bear a special epistemic relation to each other. Elsewhere I have developed such an approach and suggested that this analysans-analysandum relation has the following facets:

(i) The analysans and analysandum are necessarily coextensive, i.e., necessarily every instance of one is an instance of the other.

(ii) The analysans and analysandum are knowable a priori to be coextensive.

(iii) The analysandum is simpler than the analysans, a condition whose necessity is recognized in classical writings on analysis, such as Langford, 1942.

(iv) The analysans does not have the analysandum as a constituent.

Condition (iv) rules out circularity. But since many valuable quasi-analyses are partly circular, e.g., knowledge is justified true belief supported by known reasons not essentially grounded in any falsehood, it seems best to distinguish between full analysis, for which (iv) is a necessary condition, and partial analysis, for which it is not.

These conditions, while necessary, are clearly insufficient. The basic problem is that they apply to too many pairs of concepts that do not seem closely enough related epistemologically to count as analysans and analysandum - such as the concept of being 6 and the concept of being the fourth root of 1296. Accordingly, the solution focuses on what actually seems epistemologically distinctive about analyses of the sort under consideration, which is a certain way they can be justified. This is the philosophical example-and-counterexample method, which in general terms goes as follows. 'J' investigates the analysis of 'K's' concept 'Q' (where 'K' can but need not be identical to 'J') by setting 'K' a series of armchair thought experiments, i.e., presenting 'K' with a series of simple described hypothetical test cases and asking 'K' questions of the form 'If such-and-such were the case, would this count as a case of Q?' 'J' then contrasts the descriptions of the cases to which 'K' answers affirmatively with the descriptions of the cases to which 'K' does not, and 'J' generalizes upon these descriptions to arrive at the concepts (if possible not including the analysandum) and their mode of combination that constitute the analysans of 'K's' concept 'Q'. Since 'J' need not be identical with 'K', there is no requirement that 'K' himself be able to perform this generalization, to recognize its result as correct, or even to understand the analysans that is its result. This is reminiscent of Walton's observation that one can simply recognize a bird as a blue jay without realizing just what features of the bird (beak, wing configuration, etc.) form the basis of this recognition. (The philosophical significance of this way of recognizing is discussed in Walton, 1972.) 'K' answers the questions based solely on whether the described hypothetical cases strike him as cases of 'Q'. 'J' observes certain strictures in formulating the cases and questions. He makes the cases as simple as possible, to minimize the possibility of confusion and to minimize the likelihood that 'K' will draw upon his philosophical theories (or quasi-philosophical, rudimentary ones if he is philosophically unsophisticated) in answering the questions. If the results for different cases conflict, the conflict should, other things being equal, be resolved in favour of the simpler case. 'J' makes the series of described cases wide-ranging and varied, with the aim of having it be a complete series, where a series is complete if and only if no omitted case is such that, if included, it would change the analysis arrived at. 'J' does not, of course, use as a test-case description anything complicated and general enough to express the analysans. There is no requirement that the described hypothetical test cases be formulated only in terms of what can be observed. Moreover, using described hypothetical situations as test cases enables 'J' to frame the questions in such a way as to rule out extraneous background assumptions to a degree; thus, even if 'K' correctly believes that all and only 'P's' are 'R's', the question of whether the concepts of 'P', 'R', or both enter the analysans of his concept 'Q' can be investigated by asking him such questions as 'Suppose (even if it seems preposterous to you) that you were to find out that there was a P that was not an R. Would you still consider it a case of Q?'

Taking all this into account, a further necessary condition for this sort of analysans-analysandum relation is as follows: if 'S' is the analysans of 'Q', the proposition that necessarily all and only instances of 'S' are instances of 'Q' can be justified by generalizing from intuitions about the correct answers to questions of the sort indicated, asked about a varied and wide-ranging series of simple described hypothetical situations.

An antinomy occurs when we are able to argue for, or demonstrate, both a proposition and its contradictory. Roughly speaking, the contradictory of a proposition 'p' is one that can be expressed in the form 'not-p'; or, if 'p' can itself be expressed in the form 'not-q', then a contradictory is one that can be expressed in the form 'q'. Thus, e.g., if p is 2 + 1 = 4, then 2 + 1 ≠ 4 is the contradictory of 'p', for 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). If p is 2 + 1 ≠ 4, then 2 + 1 = 4 is a contradictory of 'p', since 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). In general, mutually contradictory propositions can be expressed in the forms 'r' and 'not-r'. The Principle of Contradiction says that mutually contradictory propositions cannot both be true and cannot both be false. Thus, by this principle, since if p is true, not-p is false, no proposition p can be at once true and false (otherwise both 'p' and its contradictory would be true, and both would be false). In particular, for any predicate 'P' and object 'x', it cannot be that 'P' is at once true of 'x' and false of 'x'. This is the classical formulation of the principle of contradiction. In an antinomy, nonetheless, we cannot at present fault either demonstration; we would hope eventually to be able to solve the antinomy by managing, through careful thinking and analysis, to fault one or both of the demonstrations.

A contradiction is the conjunction of a proposition and its negation, and the law of non-contradiction provides that no such conjunction can be true: not (p & not-p). The standard proof of the inconsistency of a set of propositions or sentences is to show that a contradiction may be derived from them.
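As a toy illustration of such a proof (an invented example, not drawn from the text above), the set {p → q, p, not-q} is inconsistent, since a contradiction follows from it:

    1.\ p \to q \quad \text{(premise)}
    2.\ p \quad \text{(premise)}
    3.\ \neg q \quad \text{(premise)}
    4.\ q \quad \text{(modus ponens, 1, 2)}
    5.\ q \land \neg q \quad \text{(conjunction, 3, 4), violating } \neg(p \land \neg p) \text{ with } q \text{ in place of } p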

In Hegelian and Marxist writing the term is used more widely: a contradiction may be a pair of features that together produce an unstable tension in a political or social system; a 'contradiction' of capitalism might be the arousal of expectations in the workers that the system cannot satisfy. For Hegel the gap between this and genuine contradiction is not as wide as it is for other thinkers, given his equation between systems of thought and their historical embodiment.

A contractarian approach to problems of ethics asks what solution could be agreed upon by contracting parties, starting from certain idealized positions (for example, no ignorance, no inequalities of power enabling one party to force unjust solutions upon another, no malicious ambitions). The idea of thinking of civil society, with its differing distribution of rights and obligations, as if it were established by a social contract derives from the English philosopher Thomas Hobbes (1588-1679) and from Jean-Jacques Rousseau (1712-78). The utility of such a model was attacked by the Scottish philosopher, historian and essayist David Hume (1711-76), who asked why, given that no historical event of establishing a contract ever took place, it is useful to allocate rights and duties as if it had; he also pointed out that the actual distribution of these things in a society owes too much to contingent circumstances to be derivable from any such model. Similar positions in general ethical theory, sometimes called contractualism, see the right thing to do as one that could be agreed upon in a hypothetical contract.

Recall that, somewhat loosely, a paradox arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions, and that, more strictly, a paradox is a sentence that is true if and only if it is false, such as 'The displayed sentence is false', which is false if true and true if false. Paradoxes of either sort present an important philosophical challenge, and epistemologists are especially concerned with those having to do with knowledge and belief.

Moreover, paradoxes are an easy source of antinomies. For example, Zeno gave some famous logical, non-mathematical arguments that might be interpreted as demonstrating that motion is impossible. But our eyes, as it were, demonstrate motion (exhibit moving things) all the time. Where did Zeno go wrong? Where do our eyes go wrong? If we cannot readily answer at least one of these questions, then we are in an antinomy. In the Critique of Pure Reason, Kant gave demonstrations of both members of certain pairs of propositions, e.g., that the world has a beginning in time and space, and that the world has no beginning in time or space. He argues that both demonstrations are at fault because they proceed on the basis of pure reason unconditioned by sense experience.

At this point we turn to the theory of experience. Experience is not possible to define in an illuminating way; however, we know what experiences are through acquaintance with some of our own, e.g., a visual experience of an after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface, rough or smooth, or might be part of a dream, or the product of a vivid sensory imagination). The essential feature of experience is that it feels a certain way - that there is something that it is like to have it. We may refer to this feature of an experience as its character.

Another core feature of the sorts of experiences with which we are concerned is that they have representational content. (Unless otherwise indicated, 'experience' will be reserved for experiences with such content.) The most obvious cases of experiences with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but we do so more commonly by means of perceptual verbs combined with noun phrases specifying their contents, as in 'Macbeth saw a dagger'. This is, however, ambiguous between the perceptual claim 'There was a (material) dagger in the world that Macbeth perceived visually' and 'Macbeth had a visual experience of a dagger' (the reading with which we are concerned, as when the experience is supplied by imagination or hallucination).

As in the case of other mental states and events with content, it is important to distinguish between the properties that an experience represents and the properties that it possesses. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property that it possesses, and it may even do so in virtue of possessing that property, as when a rapidly changing (complex) experience represents something as changing rapidly. However, this is the exception and not the rule.

Which properties can be (directly) represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., colour and shape in the case of visual experience, and apparent shape, surface texture, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes the pure data of experience to serve as logically certain foundations for knowledge. The immediate objects of perceptual awareness are then taken to be sense-data, items such as colour patches and shapes, which are usually supposed distinct from the surfaces of physical objects. Qualities of sense-data are supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain, and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual field when conditions of perception change, whereas physical objects remain constant.

Others, who do not think that this wish can be satisfied, and who are more impressed with the role of experience in providing animals with ecologically significant information about the world around them, claim that sense experiences represent properties, characteristics and kinds that are much richer and much more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire; we do not smell only odours, but also food and filth. There is no space here to examine the factors relevant to a choice between these alternatives. Yet, while this may suggest that character and content are not really distinct, there is at least a close tie between them. For one thing, the relative complexity of the character of a sense experience places limitations upon its possible content, e.g., a tactile experience of something touching one's left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Moreover, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences, e.g., the sort of gustatory experience that we have when eating chocolate would not represent chocolate unless it was normally caused by chocolate. Granting a contingent tie between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.

Character and content are none the less irreducibly different, for the following reasons. (1) There are experiences that completely lack content, e.g., certain bodily pleasures. (2) Not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance. (3) Experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different. (4) The content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may acquire the content 'singing bird' only after the subject has learned something about birds.

According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one phenomenological and the other semantic.

In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us, be it an individual thing, an event, or a state of affairs.

The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following. (1) Simple attributions of experience, e.g., 'Rod is experiencing a pink square', seem to be relational. (2) We appear to refer to objects of experience and to attribute properties to them, e.g., 'The after-image that John experienced was green'. (3) We appear to quantify over objects of experience, e.g., 'Macbeth saw something that his wife did not see'.

The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data - private mental entities that actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property, e.g., redness, without representing it as having any subordinate determinate property, e.g., any specific shade of red, a sense-datum would have to have a determinable property without having any determinate property subordinate to it. Even more disturbing is that sense-data may have contradictory properties, since experiences can have contradictory contents. A case in point is the waterfall illusion: if you stare at a waterfall for a minute and then immediately fixate on a nearby rock, you are likely to have an experience of the rock's moving upward while it remains in the same place. The sense-datum theorist must either deny that there are such experiences or admit contradictory objects.

These problems can be avoided by treating objects of experience as properties. This, however, fails to do justice to the appearances, for experience seems not to present us with bare properties, but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience and objects of perception in the case of experiences that constitute perception.

According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term 'sense-data' is now usually applied to the latter, but has also been used as a general term for objects of sense experiences, as in the work of G. E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. For the sense-datum theorist, objects of perception (of which we are indirectly aware) are always distinct from objects of experience (of which we are directly aware); Meinongians, however, may treat objects of perception as existing objects of experience. Still, most philosophers will feel that the Meinongian's acceptance of impossible objects is too high a price to pay for these benefits.

A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)

In view of the above problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but is nonetheless answerable. The seemingly relational structure of attributions of experience is a challenge dealt with below in connexion with the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves and quantification over experiences tacitly typed according to content. Thus, 'The after-image that John experienced was colourful' becomes 'John's after-image experience was an experience of colour', and 'Macbeth saw something that his wife did not see' becomes 'Macbeth had a visual experience that his wife did not have'.

Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions, e.g., Julie's experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand, or, if she does not acquire this belief, with a disposition to acquire it that has somehow been blocked.

This position has attractions. It does full justice to the cognitive contents of experience, and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is completely undermined by its failure to accommodate the fact, noted above, that experiences have a felt character that cannot be reduced to their content.

The adverbial theory is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience that does not require objects of experience. Unfortunately, the oddities of explicit adverbializations of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound intuitions, and there is reason to believe that an effective development of the theory (which is only hinted at here) is possible.

The relevant intuitions are (1) that when we say that someone is experiencing an A, or has an experience of an A, we are using this content-expression to specify the type of thing that the experience is especially apt to fit, (2) that doing this is a matter of saying something about the experience itself (and perhaps about the normal causes of like experiences), and (3) that there is no good reason to suppose that this requires positing a special object of which the experience is an experience. Thus the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.

Perhaps the most important criticism of the adverbial theory is the many-property problem, according to which the theory does not have the resources to distinguish between, e.g.,

(1) Frank has an experience of a brown triangle

and:

(2) Frank has an experience of brown and an experience of a triangle.

(2) is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience that is both brown and triangular, while (2) allows for the possibility of two objects of experience, one brown and the other triangular. Note, however, that (1) is equivalent to:

(1*) Frank has an experience of something being both brown and triangular.

And (2) is equivalent to:

(2*) Frank has an experience of something being brown and an experience of something being triangular,

and the difference between these can be explained quite simply in terms of logical scope without invoking objects of experience. The adverbialist may use this to answer the many-property problem by arguing that the phrase 'a brown triangle' in (1) does the same work as the clause 'something being both brown and triangular' in (1*). This is perfectly compatible with the view that it also has the adverbial function of modifying the verb 'has an experience of', for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there is something both brown and triangular before Frank).

A final position that should be mentioned is the state theory, according to which a sense experience of an A is an occurrent, non-relational state of the kind that the subject would be in when perceiving an A. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Here it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt adverbialism as a means of developing their intuitions.

Sense-data, taken literally, are that which is given by the senses. But in response to the question of what exactly is so given, sense-data theories posit private showings in the consciousness of the subject. In the case of vision this would be a kind of inner picture show which only indirectly represents aspects of the external world. The view has been widely rejected as implying that we really only see extremely thin coloured pictures interposed between our mind's eye and reality. Modern approaches to perception tend to reject any conception of the eye as a camera or lens, simply responsible for producing private images, and stress the active life of the subject in the world as the determinant of experience.

Nevertheless, the argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument that must be distinguished carefully. Some of these distinctions centre on the content of the premises (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). Let us begin by distinguishing the importantly different versions of direct realism which one might take to be vulnerable to familiar facts about the possibility of perceptual illusion.

A crude statement of direct realism might go as follows. In perception, we sometimes directly perceive physical objects and their properties; we do not always perceive physical objects by perceiving something else, e.g., a sense-datum. There are, however, difficulties with this formulation of the view, for one thing because a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually perceiving something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-datum theorists should want. Many of the philosophers who object to direct realism would prefer to express what they are objecting to in terms of a technical (and philosophically controversial) concept such as acquaintance. Using such a notion, we could define direct realism this way: in veridical experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects. The expressions 'knowledge by acquaintance' and 'knowledge by description', and the distinction they mark between knowing things and knowing about things, are generally associated with Bertrand Russell (1872-1970). Russell held that scientific philosophy required analysing many objects of belief as logical constructions or logical fictions, and the programme of analysis this inaugurated dominated his subsequent philosophy of logical atomism and influenced other philosophers. In Russell's The Analysis of Mind (1921), the mind itself is treated in a fashion reminiscent of Hume, as no more than the collection of neutral perceptions or sense-data that make up the flux of conscious experience and that, looked at another way, also make up the external world (neutral monism); An Inquiry into Meaning and Truth (1940) represents a more empirical approach to the problem. Philosophers have since repeatedly investigated this and related distinctions under varying terminology.

This distinction in our ways of knowing things was highlighted by Russell and formed a central element in his philosophy after his discovery of the theory of definite descriptions. A thing is known by acquaintance when there is direct experience of it. It is known by description if it can only be described as a thing with such-and-such properties. In everyday parlance, I might know my spouse and children by acquaintance, but know someone as 'the first person born at sea' only by description. However, for a variety of reasons Russell shrinks the range of things that can be known by acquaintance until eventually only current experience, perhaps my own self, and certain universals or meanings qualify; anything else is known only as the thing that has such-and-such qualities.

Because one can interpret the relation of acquaintance or awareness as one that is not epistemic, i.e., not a kind of propositional knowledge, it is important to distinguish the views above, read as ontological theses, from a view one might call epistemological direct realism: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. Such objects exist independently of any mind that might perceive them, and so direct realism thereby rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its directness rules out those views defended under the rubric of critical realism, or representational realism, on which there is some nonphysical intermediary - usually called a sense-datum or a sense impression - that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is immediately, rather than mediately, perceived. What relevance does illusion have for these two forms of direct realism?

The fundamental premise of the argument from illusion seems to be the thesis that things can appear to be other than they are. Thus, for example, a straight stick immersed in water looks bent, a penny viewed from a certain perspective appears elliptical, and something yellow placed under red fluorescent light looks red. In all of these cases, one version of the argument goes, it is implausible to maintain that what we are directly acquainted with is the real nature of the object in question. Indeed, it is hard to see how we can be said to be aware of the real physical object at all. In the above illusions the things we were aware of actually were bent, elliptical and red, respectively. But, by hypothesis, the real physical objects lacked these properties. Thus, what we were aware of was not the real physical object.

So far, if the argument is relevant to any of the direct realisms distinguished above, it seems relevant only to the claim that in all sense experience we are directly acquainted with parts or constituents of physical objects. After all, even if in illusion we are not acquainted with physical objects, their surfaces, or their constituents, why should we conclude anything about the nature of our relation to the physical world in veridical experience?

We are supposed to discover the answer to this question by noticing the similarities between illusory experience and veridical experience and by reflecting on what makes illusion possible at all. Illusion can occur because the nature of the illusory experience is determined not just by the nature of the object perceived, but also by other conditions, both external and internal. But all of our sensations are subject to these causal influences, and it would be gratuitous and arbitrary to select, from the indefinitely many and subtly different perceptual experiences, some special ones as those that get us in touch with the real nature of the physical world and its surfaces. Red fluorescent light affects the way things look, but so does sunlight. Water refracts light, but so does air. We have no unmediated access to the external world.

The philosophy of science and scientific epistemology are not the only areas where philosophers have lately urged the relevance of neuroscientific discoveries. Kathleen Akins argues that a traditional view of the senses underlies a variety of sophisticated naturalistic programs about intentionality. Current neuroscientific understanding of the mechanisms and coding strategies implemented by sensory receptors shows that this traditional view is mistaken. The traditional view holds that sensory systems are veridical in at least three ways. (1) Each signal in the system correlates with a small range of properties in the external (to the body) environment. (2) The structure of the relations between the external properties the receptors are sensitive to is preserved in the structure of the relations between the resulting sensory states. (3) The sensory system reconstructs the external events faithfully, without fabricated additions or embellishments. Using recent neurobiological discoveries about the response properties of thermal receptors in the skin as an illustration, Akins argues that sensory systems are 'narcissistic' rather than veridical: all three traditional assumptions are violated. These neurobiological details and their philosophical implications open novel questions for the philosophy of perception and for the appropriate foundations for naturalistic projects about intentionality. Armed with the known neurophysiology of sensory receptors, for example, our philosophy of perception or of perceptual intentionality will no longer focus on the search for correlations between states of sensory systems and veridically detected external properties. This traditional philosophical (and scientific) project rests upon a mistaken veridical view of the senses. Knowledge of the neurophysiology of sensory receptors shows that sensory experience does not serve the naturalist well as a simple paradigm case of intentional relations between representation and world. Once again, available scientific detail shows the naivety of some traditional philosophical projects.

Focussing on the anatomy and physiology of the pain transmission system, Valerie Hardcastle (1997) urges a similar negative implication for a popular methodological assumption. Pain experiences have long been philosophers' favourite cases for analysis and theorizing about conscious experience generally. Nevertheless, every position about pain experiences has been defended recently: eliminativist views, a variety of objectivist views, relational views, and subjectivist views. Why so little agreement, despite agreement that pain experience is the place to start an analysis or theory of consciousness? Hardcastle urges two answers. First, philosophers tend to be uninformed about the neuronal complexity of our pain transmission systems, and build their analyses or theories on the outcome of a single component of a multi-component system. Second, even those who understand some of the underlying neurobiology of pain tend to advocate gate-control theories. But the best existing gate-control theories are vague about the neural mechanisms of the gates. Hardcastle instead proposes a dissociable dual system of pain transmission, consisting of a pain sensory system closely analogous in its neurobiological implementation to other sensory systems, and a descending pain inhibitory system. She argues that this dual system is consistent with recent neuroscientific discoveries and accounts for all the pain phenomena that have tempted philosophers toward particular (but limited) theories of pain experience. The neurobiological uniqueness of the pain inhibitory system, contrasted with the mechanisms of other sensory modalities, renders pain processing atypical. In particular, the pain inhibitory system dissociates pain sensation from stimulation of nociceptors (pain receptors). Hardcastle concludes from the neurobiological uniqueness of pain transmission that pain experiences are atypical conscious events, and hence not a good place to start theorizing about or analysing the general type.

Developing and defending theories of content is a central topic in current philosophy of mind. A common desideratum in this debate is a theory of cognitive representation consistent with a physical or naturalistic ontology. Here we describe a few contributions neurophilosophers have made to this literature.

When one perceives or remembers that he is out of coffee, his brain state possesses intentionality or 'aboutness'. The percept or memory is about one's being out of coffee, and it represents one as being out of coffee. The representational state has content. A psychosemantics seeks to explain what it is for a representational state to be about something: to provide an account of how states and events can have specific representational content. A physicalist psychosemantics seeks to do this using the resources of the physical sciences exclusively. Neurophilosophers have contributed to two types of physicalist psychosemantics: the functional role approach and the informational approach.

Functional role semantics holds that a representation has its content in virtue of the relations it bears to other representations. Its paradigm application is to the concepts of truth-functional logic, like the conjunctive 'and' and the disjunctive 'or': a physical event instantiates the 'and' function just in case it maps two true inputs onto a single true output. Thus an expression bears the relations to others that give it the semantic content of 'and'. Proponents of functional role semantics propose similar analyses for the content of all representations (Block 1986). A physical event represents birds, for example, if it bears the right relations to events representing feathers and others representing beaks. By contrast, informational semantics ascribes content to a state depending upon the causal relations obtaining between the state and the object it represents. A physical state represents birds, for example, just in case an appropriate causal relation obtains between it and birds. At the heart of informational semantics is a causal account of information. Red spots on a face carry the information that one has measles because the red spots are caused by the measles virus. A common criticism of informational semantics holds that mere causal covariation is insufficient for representation, since information (in the causal sense) is, by definition, always veridical, while representations can misrepresent. A popular solution to this challenge invokes a teleological analysis of function: a brain state represents X by virtue of having the function of carrying information about being caused by X (Dretske 1988). These two approaches do not exhaust the popular options for a psychosemantics, but they are the ones to which neurophilosophers have contributed.
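
On the functional role picture just sketched, what makes a physical event an 'and'-event is nothing but its input-output profile over truth-values. The following toy Python sketch is purely illustrative (the threshold device and all names are our own assumptions, not anything from the literature cited): it checks whether a hypothetical physical realization instantiates the 'and' truth function by comparing its behaviour with the truth table for conjunction.

from itertools import product

def and_function(p, q):
    """The truth function for conjunction."""
    return p and q

def threshold_device(p, q):
    """Hypothetical physical realization: output counts as 'true' only when the
    summed input signal exceeds a threshold of 1.5 units."""
    signal = int(p) + int(q)
    return signal > 1.5

def instantiates(device, truth_function):
    """The device bears the right functional relations if its input-output
    profile matches the truth function on every pair of truth-values."""
    return all(device(p, q) == truth_function(p, q)
               for p, q in product([True, False], repeat=2))

print(instantiates(threshold_device, and_function))  # prints True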

Jerry Fodor and Ernest LePore raise an important challenge to Churchland's psychosemantics. Location in a state space alone seems insufficient to fix a state's representational content. Churchland never explains why a point in a three-dimensional state space represents a colour, as opposed to any other quality, object, or event that varies along three dimensions. Churchland's account achieves its explanatory power by the interpretation imposed on the dimensions. Fodor and LePore allege that Churchland never specifies how a dimension comes to represent, e.g., degree of saltiness, as opposed to yellow-blue wavelength opposition. One obvious answer appeals to the stimuli that form the external inputs to the neural network in question. Then, for example, the individuating conditions on neural representations of colours are that opponent-processing neurons receive input from a specific class of photoreceptors. The latter in turn have electromagnetic radiation (of a specific portion of the visible spectrum) as their activating stimuli. However, this appeal to external stimuli as the ultimate individuating conditions for representational content makes the resulting approach a version of informational semantics. Is this approach consistent with other neurobiological details?

The neurobiological paradigm for informational semantics is the feature detector: one or more neurons that are (i) maximally responsive to a particular type of stimulus, and (ii) have the function of indicating the presence of that stimulus type. Examples of such stimulus-types for visual feature detectors include high-contrast edges, motion direction, and colours. A favourite feature detector among philosophers is the alleged fly detector in the frog. Lettvin et al. (1959) identified cells in the frog retina that responded maximally to small shapes moving across the visual field. The idea that this cell activity functioned to detect flies rested upon knowledge of the frog's diet. Using experimental techniques ranging from single-cell recording to sophisticated functional imaging, neuroscientists have recently discovered a host of neurons that are maximally responsive to a variety of stimuli. However, establishing condition (ii) on a feature detector is much more difficult. Even some paradigm examples have been called into question. David Hubel and Torsten Wiesel's (1962) Nobel Prize-winning work establishing the receptive fields of neurons in striate cortex is often interpreted as revealing cells whose function is edge detection. However, Lehky and Sejnowski (1988) have challenged this interpretation. They trained an artificial neural network to distinguish the three-dimensional shape and orientation of an object from its two-dimensional shading pattern. Their network incorporates many features of visual neurophysiology. Nodes in the trained network turned out to be maximally responsive to edge contrasts, but did not appear to have the function of edge detection.
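
Condition (i), maximal responsiveness, is the part that can be read off from response data; condition (ii), having the function of indicating, is what Lehky and Sejnowski's result puts in question. A minimal illustrative sketch of condition (i), using an invented Gaussian tuning curve rather than any measured data, might look like this in Python:

import math

def unit_response(orientation_deg, preferred_deg=45.0, bandwidth_deg=20.0):
    """Firing rate of a toy orientation-tuned unit (arbitrary units);
    the Gaussian tuning curve and its parameters are assumptions for illustration."""
    delta = orientation_deg - preferred_deg
    return math.exp(-(delta ** 2) / (2 * bandwidth_deg ** 2))

stimuli = range(0, 180, 5)              # candidate edge orientations (degrees)
best = max(stimuli, key=unit_response)  # stimulus eliciting the maximal response
print(f"Maximally responsive to edges at {best} degrees")

# This establishes only condition (i). Condition (ii) -- that the unit has the
# *function* of indicating that stimulus type -- cannot be read off the response
# profile alone, which is the point pressed by Lehky and Sejnowski's network.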

Kathleen Akins (1996) offers a different neurophilosophical challenge to informational semantics and its affiliated feature-detection view of sensory representation. We saw in the previous section how Akins argues that the physiology of thermoreceptors violates three necessary conditions on veridical representation. From this fact she draws doubts about looking for feature-detecting neurons to ground a psychosemantics generally, including for thought contents. Human thoughts about flies, for example, are sensitive to numerical distinctions between particular flies and the particular locations they can occupy. But the ends of frog nutrition are well served without a representational system sensitive to such ontological refinements. Whether a fly seen now is numerically identical to one seen a moment ago need not, and perhaps cannot, figure in the frog's feature-detection repertoire. Akins's critique casts doubt on whether details of sensory transduction will scale up to an adequately unified psychosemantics. It also raises new questions for human intentionality. How do we get from activity patterns in 'narcissistic' sensory receptors, keyed not to objective environmental features but rather only to effects of the stimuli on the patch of tissue innervated, to the human ontology replete with enduring objects with stable configurations of properties and relations, types and their tokens (as the fly-thought example presented above reveals), and the rest? And how did the development of a stable and rich ontology confer survival advantages on our ancestors?

Consciousness has reemerged as a topic in the philosophy of mind and the cognitive sciences over the past three decades. Instead of ignoring it, many physicalists now seek to explain it (Dennett 1991). Here we focus exclusively on ways that neuroscientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. Thomas Nagel (1937-) argues that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding. He invites us to ponder what it is like to be a bat and urges the intuition that no amount of physical-scientific knowledge (including neuroscientific) supplies a complete answer. Nagel's work is centrally concerned with the nature of moral motivation and the possibility of a rational theory of moral and political commitment, and has been a major impetus of interest in realist and Kantian approaches to these issues, but his most influential contribution to the philosophy of mind has been 'What Is It Like to Be a Bat?', arguing that there is an irreducible subjective aspect of experience that cannot be grasped by the objective methods of natural science, or by philosophies such as functionalism that confine themselves to those methods. This intuition pump has generated extensive philosophical discussion. At least two well-known replies make direct appeal to neurophysiology. John Biro suggests that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993) delves deeper into existing knowledge of bat physiology and reports much that is pertinent to Nagel's question. She argues that many of the questions about subjectivity that we still consider open hinge on questions that remain unanswered about neuroscientific details.

More recently, David Chalmers (1996) has argued that any possible brain-process account of consciousness will leave open an explanatory gap between the brain process and the properties of conscious experience. This is because no brain-process theory can answer the hard question: why should that particular brain process give rise to conscious experience? We can always imagine (conceive of) a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. The fact that this hard question remains unanswered suggests that we will probably never get a complete explanation of consciousness at the level of neural mechanisms. Paul and Patricia Churchland have recently offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing brains like ours but wholly lacking in conscious experience. But the more one learns about how the brain produces conscious experience - and a literature is beginning to emerge (e.g., Gazzaniga 1995) - the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. This is not just a bare assertion. The Churchlands appeal to neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly diffusely projecting nuclei like the intralaminar nuclei) and the cortex. Churchland argues that this thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid eye movement) sleep, and for other core features of conscious experience. In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one cannot imagine or conceive of this activity occurring without these core features of conscious experience. (Other than just mouthing the words, 'I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming . . .')

A second focus of sceptical arguments about a complete neuroscientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colours of visual sensations are a philosopher's favourite example. One famous puzzle about colour qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans to be alike in all relevant behavioural and neurophysiological respects, while the colour that fire engines and tomatoes appear to have to one subject is the colour that grass and frogs appear to have to the other (and vice versa). A large amount of neurophysiologically informed philosophy has addressed this question. A related area where neurophilosophical considerations have emerged concerns the metaphysics of colours themselves (rather than colour experiences). A longstanding philosophical dispute is whether colours are objective properties existing external to perceivers, or are rather identical with or dependent upon minds or nervous systems. Some recent work on this problem begins with characteristics of colour experiences: for example, that colour similarity judgments produce colour orderings that align on a circle. With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colours with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colours with activity in opponent-processing neurons does. Such a tidbit is not decisive for the colour objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind.
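
The point about the hue circle can be made concrete with a toy calculation. In the Python sketch below (the channel values are invented placeholders, not psychophysical data), pairs of opponent-channel activations map naturally onto angles, yielding a circular ordering of hues, whereas a single wavelength value orders hues along a line and assigns no value at all to non-spectral purples.

import math

# Toy opponent-channel activations (red-green, blue-yellow) for a few hues.
# Values are invented placeholders, not psychophysical measurements.
opponent = {
    "red":    ( 1.0,  0.0),
    "yellow": ( 0.0,  1.0),
    "green":  (-1.0,  0.0),
    "blue":   ( 0.0, -1.0),
    "purple": ( 0.7, -0.7),
}

def hue_angle(rg, by):
    """Map a pair of opponent-channel activations to an angle on the hue circle."""
    return math.degrees(math.atan2(by, rg)) % 360

for name, (rg, by) in opponent.items():
    print(f"{name:7s} -> {hue_angle(rg, by):6.1f} degrees")

# Wavelength, by contrast, orders hues along a line (roughly 400-700 nm) and so
# cannot close the circle; non-spectral purples have no single wavelength at all.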

We saw in the discussion of Hardcastle (1997) two sections above that neurophilosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. He compares and notes the mismatch between neurophysiological discoveries and common-sense intuitions about pain experience. He suspects that the incommensurability between scientific and common-sense views is due to incoherence in the latter. His attitude is wait-and-see. But foreshadowing the Churchlands' reply to Chalmers, Dennett favours scientific investigation over conceivability-based philosophical arguments.

Neurological deficits have attracted philosophical interest. For thirty years philosophers have found implications for the unity of the self in experiments with commissurotomy patients. In carefully controlled experiments, commissurotomy patients display two dissociable seats of consciousness. Patricia Churchland scouts philosophical implications of a variety of neurological deficits. One deficit is blindsight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced-guess trials about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1988) worries that many of these conflate distinct notions of consciousness. He labels these notions phenomenal consciousness (P-consciousness) and access consciousness (A-consciousness). The former is the 'what it is like'-ness of undergoing an experience; the latter is the availability of representational content to self-initiated action and speech. Block argues that P-consciousness is not always representational whereas A-consciousness is. Dennett and Michael Tye are sceptical of non-representational analyses of consciousness in general. They provide accounts of blindsight that do not depend on Block's distinction.

Many other topics are worth neurophilosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of colour and pain, and self-consciousness, have also begun to attract neurophilosophical attention. The first issues to arise in the philosophy of neuroscience (before there was a recognized area) concerned the localization of cognitive functions to specific neural regions. Although the localization approach had dubious origins in the phrenology of Gall and Spurzheim, and was challenged severely by Flourens throughout the early nineteenth century, it reemerged in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (where possible) of linguistic deficits in their aphasic patients, followed by brain autopsies post mortem. Broca's initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech-production deficits. Although the anatomical coordinates Broca postulated for the speech-production centre do not correlate exactly with the damage that produces such deficits, both this area of frontal cortex and the deficit still bear his name (Broca's area and Broca's aphasia). Less than two decades later Carl Wernicke published evidence for a second language centre. This area is anatomically distinct from Broca's area, and damage to it produces a very different set of aphasic symptoms. The cortical area that still bears his name (Wernicke's area) is located around the first and second convolutions in temporal cortex, and the aphasia that bears his name (Wernicke's aphasia) involves deficits in language comprehension. Wernicke's method, like Broca's, was based on lesion studies: a careful evaluation of the behavioural deficits followed by post-mortem examination to find the sites of tissue damage and atrophy. Lesion studies suggesting more precise localization of specific linguistic functions remain a cornerstone of neurolinguistic research to this day.

Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioural measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in these areas) and remeasure performance on the same behavioural tests. But since we lack an animal model for (human) language production and comprehension, this additional evidence is not available to the neurologist or neurolinguist. This fact makes the study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Philosopher Barbara Von Eckardt (1978) attempts to make explicit the steps of reasoning involved in this common and historically important method. Her analysis begins with Robert Cummins's early analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. These analyses break down a complex capacity C into its constituent capacities c1, c2, . . ., cn, where the constituent capacities are consistent with the underlying structural details of the system. For example, human speech production (complex capacity C) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (constituent capacities c1, c2, . . ., cn). A functional-localization hypothesis has the form: brain structure S in an organism (type) O has constituent capacity ci, where ci is a function of some part of O. For example: Broca's area (S) in humans (O) formulates the motor commands to produce the appropriate sounds (one of the constituent capacities ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.
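
Von Eckardt's schema can be rendered as a small illustrative data structure. The Python below is our own expository scaffolding, not her notation, but it displays the form 'brain structure S in organism O has constituent capacity ci'.

from dataclasses import dataclass
from typing import List

@dataclass
class FunctionalAnalysis:
    complex_capacity: str        # the complex capacity C
    constituents: List[str]      # constituent capacities c1, ..., cn

@dataclass
class LocalizationHypothesis:
    structure: str               # brain structure S
    organism: str                # organism type O
    constituent_capacity: str    # the constituent capacity ci realized by S

speech_production = FunctionalAnalysis(
    complex_capacity="human speech production",
    constituents=[
        "formulate a speech intention",
        "select linguistic representations capturing the intention's content",
        "formulate motor commands to produce the appropriate sounds",
        "communicate motor commands to the appropriate motor pathways",
    ],
)

broca_hypothesis = LocalizationHypothesis(
    structure="Broca's area",
    organism="human",
    constituent_capacity=speech_production.constituents[2],
)

print(f"{broca_hypothesis.structure} ({broca_hypothesis.organism}) realizes: "
      f"{broca_hypothesis.constituent_capacity}")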

Armed with these characterizations, Von Eckardt argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized based on the abnormal behaviour the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional deficit hypothesis plus the evidence about the site of brain damage. The structurally adequate functional analysis of the capacity connects the pathological behaviour to the hypothesized functional deficit. This connexion suggests four adequacy conditions on a functional deficit hypothesis. First, the pathological behaviour P (e.g., the speech deficits characteristic of Broca's aphasia) must result from failing to exercise some complex capacity C (human speech production). Second, there must be a structurally adequate functional analysis of how people exercise capacity C that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally adequate functional analysis minus the operation of the component performing ci (Broca's area) must result in pathological behaviour P. Fourth, there must not be a better available explanation for why the patient does P. Argument to a functional deficit hypothesis on the basis of pathological behaviour is thus an instance of inference to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.

Von Eckardt applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia. Her philosophical explication of this important neurological method reveals that most challenges to localization arguments argue only against the localization of a particular type of functional capacity, or against generalizing from the localization of a function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic, but they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower-level sciences investigating the underlying physical mechanisms. Arguments to the best available explanation are often hampered by a lack of theoretical imagination: the available explanations are often severely limited. We must seek theoretical inspiration from any level of theory and explanation. Hence making explicit the logic of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt anticipated what came to be heralded as the co-evolutionary research methodology, which remains a centerpiece of neurophilosophy to the present day.

Over the last two decades, evidence for localization of cognitive function has come increasingly from a new source: the development and refinement of neuroimaging techniques. The form of localization-of-function argument appears not to have changed from that employing lesion studies (as analysed by Von Eckardt). Instead, these imaging technologies resolve some of the methodological problems that plague lesion studies. For example, researchers do not need to wait until the patient dies, and in the meantime probably acquires additional brain damage, to find the lesion sites. Two functional imaging techniques are prominent: positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). Although these measure different biological markers of functional activity, both now have a resolution down to around one millimetre. As these techniques increase the spatial and temporal resolution of functional markers and continue to be used with sophisticated behavioural methodologies, the possibility of localizing specific psychological functions to increasingly specific neural regions continues to grow.



What we now know about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. The same evaluation holds for all levels of explanation and theory about the mind/brain: maps, networks, systems, and behaviour. This is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and the theoretical frameworks within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture gets neglected: the relationships between the levels, the glue that binds knowledge of neuron activity to subcellular and molecular mechanisms, network activity patterns to the activity of and connectivity between single neurons, and behaviour to network activity. This problem is especially glaring when we focus on the relationship between cognitivist psychological theories, postulating information-bearing representations and processes operating over their contents, and the activity patterns in networks of neurons. Co-evolution between explanatory levels still seems more a distant dream than an operative methodology.

It is here that some neuroscientists appeal to computational methods. If we examine the way computational models function in more developed sciences (like physics), we find the resources of dynamical systems constantly employed. Global effects (such as large-scale meteorological patterns) are explained in terms of the interaction of local lower-level physical phenomena, but only through dynamical, nonlinear, and often chaotic sequences and combinations. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have bridged levels in more mature sciences might yield comparable results. This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher levels like experimental psychology, program-writing and connectionist artificial intelligence, and philosophy of science.

However, the use of computational methods in neuroscience is not new. Hodgkin, Huxley, and Katz incorporated values of voltage-dependent potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modelled conductance-versus-time data that reproduced the S-shaped (sigmoidal) function suggested by their experimental data. Using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This theory provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and has been incorporated into the GENESIS software for programming neurally realistic networks. More recently, David Sparks and his colleagues have shown that a vector-averaging model of activity in neurons of the superior colliculus correctly predicts experimental results about the amplitude and direction of saccadic eye movements. Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues have predicted the direction and amplitude of hand and arm movements based on the averaged activity of 224 cells in motor cortex. Their predictions have been borne out under a variety of experimental tests. We mention these particular studies only because we are familiar with them. We could multiply examples of the fruitful interaction of computational and experimental methods in neuroscience easily a hundred-fold. Many of these extend back before computational neuroscience was a recognized research endeavour.
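
The kind of calculation behind the Hodgkin-Huxley potassium-conductance fits can be gestured at with a short sketch. In the Python below, a gating variable obeying first-order kinetics relaxes exponentially toward its steady-state value, and raising it to the fourth power yields the sigmoidal onset of the modelled conductance; all parameter values are illustrative placeholders, not the measured squid-axon constants.

import math

g_max = 36.0   # maximal K+ conductance (mS/cm^2); illustrative placeholder
n_inf = 0.9    # steady-state value of the gating variable at the test voltage
n_0   = 0.1    # value of the gating variable at t = 0
tau   = 2.0    # time constant of the first-order process (ms); illustrative

def n(t_ms):
    """First-order kinetics: n relaxes exponentially from n_0 toward n_inf."""
    return n_inf - (n_inf - n_0) * math.exp(-t_ms / tau)

def g_K(t_ms):
    """Modelled conductance: raising n to the fourth power produces the
    sigmoidal (S-shaped) onset seen in the experimental curves."""
    return g_max * n(t_ms) ** 4

for t in range(0, 11, 2):
    print(f"t = {t:2d} ms   n = {n(t):.3f}   g_K = {g_K(t):6.2f} mS/cm^2")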

We've already seen one example, the vector-transformation account of neural representation and computation, under active development in cognitive neuroscience. Other approaches using cognitivist resources are also being pursued. Many of these projects draw upon cognitivist characterizations of the phenomena to be explained. Many exploit cognitivist experimental techniques and methodologies, and some even attempt to derive cognitivist explanations from cell-biological processes (e.g., Hawkins and Kandel 1984). As Stephen Kosslyn puts it, cognitive neuroscientists employ the information-processing view of the mind characteristic of cognitivism without trying to separate it from theories of brain mechanisms. Such an endeavour calls for an interdisciplinary community willing to communicate the relevant portions of the mountain of detail gathered in individual disciplines with interested nonspecialists: not just people willing to confer with those working at related levels, but researchers trained in the methods and factual details of a variety of levels. This is a daunting requirement, but it does offer some hope for philosophers wishing to contribute to future neuroscience. Thinkers trained in both the synoptic vision afforded by philosophy and the factual and experimental basis of genuine graduate-level science would be ideally equipped for this task. Recognition of this potential niche has been slow among graduate programs in philosophy, but there is some hope that a few programs are taking steps to fill it.

In the final analysis there will be philosophers unprepared to accept the principle that, if a given cognitive capacity is psychologically real, then there must be an explanation of how it is possible for an individual in the course of human development to acquire that cognitive capacity, or unprepared to accept that this principle, or anything like it, can have a role to play in philosophical accounts of concepts and conceptual abilities. The most obvious basis for such a view would be a Fregean distrust of psychology that leads to a rigid division of labour between philosophy and psychology. The operative thought is that the task of a philosophical theory of concepts is to explain what a given concept is or what a given conceptual ability consists in, and this, it is frequently maintained, is something that can be done in complete independence of explaining how such a concept or ability might be acquired. The underlying distinction is one between philosophical questions centring on concept possession and psychological questions centring on concept acquisition. Nevertheless, adherence to this distinction by itself provides no support for rejecting the principle that a cognitive capacity which could not possibly be acquired cannot be psychologically real. The neo-Fregean distinction is directed against the view that facts about how concepts are acquired have a role to play in explaining and individuating concepts. But a supporter of the principle need not dispute this; all the supporter is committed to is the claim that no satisfactory account of what a concept is should make it impossible to explain how that concept can be acquired. This principle says nothing about the further question of whether the psychological explanation has a role to play in a constitutive explanation of the concept, and hence it is not in conflict with the neo-Fregean distinction.

A full account of the structure of consciousness needs to characterize the higher, conceptual forms of consciousness and to say something about how they might emerge, yet such accounts usually give this little attention. One tempting thought is that an explanation of everything distinctive about consciousness will fall out of an account of what it is for a subject to be capable of thinking about himself. Any adequate treatment of this complicated and complex phenomenon, however, has to confront an awkward fact: consciousness seems to be the most basic fact confronting us, yet it is almost impossible to say what consciousness is. Complicated biological and neural processes go on within the skull, but it is my consciousness that provides the medium of awareness which enables me to think; and if there is no thinking, there is no sense of consciousness.

Meanwhile, whatever complex biological and neural processes go on within the brain, it is my consciousness that provides the arena of awareness in which my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to understand the 'I' that is the spectator, or at any rate the owner, of this arena? These problems together make up what is sometimes called the hard problem of consciousness. One of the difficulties in thinking about consciousness is that the problems seem not to be scientific ones. The German philosopher, mathematician and polymath Gottfried Leibniz (1646-1716) remarked that if we could construct a machine that could think and feel, and then blow it up to the size of a football field so as to be able to examine its working parts as thoroughly as we pleased, we would still not find consciousness; and he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about how this emergence takes place, or why it takes place in just the way it does.

There are no facts about linguistic mastery alone that will determine or explain what might be termed the cognitive dynamics of individual thinkers. One task for a theory of consciousness is to chart the characteristic features individuating the various distinct conceptual forms of consciousness, in a way that provides a taxonomy, and to show how they are realized by functional and dynamic determinations at the level of content-bearing representation. What should now be clear is that these higher, conceptual forms of consciousness emerge from a rich foundation of non-conceptual representations, and that they hold the key not just to an account of how mastery of the relevant conceptual repertoire is achieved, but to a proper understanding of self-consciousness and of consciousness as a whole.

To call a theory true is to say that it is consistent with fact or reality, not false or incorrect, and the notion of truth carries several governing senses. One is alignment with how things stand: we may think of a true belief as one that is properly aligned with reality, much as something is said to be true when it is balanced, level or square; and 'true' is etymologically related to 'trust'. Another is conformity of a statement to fact or actuality, or to an original or standard; in yet another sense, truth names what is considered the supreme reality and taken to have ultimate meaning and value for existence. Finally, for a compound proposition, such as a conjunction or a negation, the truth-value is determined by the truth-values of its component theses.

Reality, for its part, even when its nature is well hidden, imposes itself on our sensing and responding: it is the quality or state of being actual or true, whether of a person, an entity, or an event, and it covers all things possessing actuality, existence, or essence. It is that which exists objectively and in fact; and the word also covers the practical sense in which realism consists in the awareness of, and adjustment to, environmental demands in the satisfaction of instinctual needs. To accept something as real is to accept that it performs its duties or functions, that something is actually done or effected, and thus to acknowledge the condition of truth as realized.

A reason, however, is a declaration made to explain or justify an action, or the belief or desire upon which one acts; it is the underlying fact or cause that provides logical sense for a premise or occurrence. The faculty of reason is exercised in taking up the premises of an argument, in spoken exchange and debate, and, of course, in dialectic. To determine a conclusion by thinking a problem through to its solution, or to persuade or dissuade someone with reasons, is to exercise good sense or reasonableness; to have good cause for a belief is simply to have justification for thinking it. Reason is the means by which humans seek or attain knowledge or truth, yet mere reason is often insufficient to convince us of a claim's veracity. Comprehension also welcomes intuitively given certainty, truth or fact grasped without the use of the rational process, as when one assesses someone's character, weighs situations or circumstances, and draws sound conclusions in the exercise of judgement.

Operatively, to be in accord with reason, or with sound thinking, is to arrive at some reasonable solution that may or may not resolve the problem; common sense, when not fenced in, arrives at practicality, especially where reason is used to form conclusions, inferences, or judgements. In argument one weighs all the evidential alternatives and thinks out responses, fitting the parts together through the intellectual faculties by which human understanding grasps its thought. The greatest dangers to liberty, it has been said, come from men of zeal, well-meaning but without understanding.

'Real' means being or occurring in fact, having verifiable existence: real objects, a real illness. It means genuinely true and actual, not imaginary, alleged, or ideal: real people, not ghosts; and it covers the practical matters and concerns of experiencing the real world, the surrounding surfaces we attest to at first hand. Things that are no less than what they seem, not taken on free pretence or affectation, count as real experience, just as one may encounter real trouble. The term thus projects an existing objectivity: the world, despite subjectivity or conventions of thought or language, has value and representation reckoned by actual power; it relates to an image formed by light or another identifiable stimulus converging in space, and to the stationary or fixed properties of a thing or whole having actual existence. All of this is accorded the status of truly factual experience, though the attestations are, in the end, brought to us by the afforded efforts of our very own imaginations.

Ideally, in theory the imagination yields a concept of reason that is transcendent but non-empirical, so that the conceptionist thinks in or of an ideal. The ideal is what potentially or actually proves eligible, worthy of being chosen, as having an independent reality or as having newly come into existence. The state or fact of being actual belongs to the realm of distinct phenomena in the mind, products of the mental act of a being who feels, perceives, thinks, wills, and especially reasons, and which had hitherto existed only as possibilities. In the philosophy of Plato, an idea is an archetype of which a corresponding being in phenomenal reality is an imperfect replica; in Hegel, the absolute idea is truth as the conception and ultimate product of reason (an 'idea' in the everyday sense being merely a mental image of something remembered).

Conceivably, imagination is the formation of a mental image of something that is neither perceived as real nor present to the senses. Nevertheless, the image so formed can confront and deal with reality by using the creative powers of the mind. Fantasy, by contrast, is characteristically well removed from reality, and to give the powers of fantasy full sway over reason is a degree of insanity. Still, fancy gives a product of the imagination free rein: one is in command of a fancy, while it is precisely the mark of the neurotic that his fantasy possesses him.

A fact concerns all things possessing actuality, existence or essence: something that exists objectively, based on real occurrences, something that has existed or is known to have existed, a real occurrence, an event, as when one has to prove the facts of a case, something believed to be true or real and determined by evidence. Usages such as 'allegation of fact', 'the facts are wrong', and 'substantive facts', as when we say we may never know the facts of the case, may occasion qualms among critics who insist that facts can only be true, but the usages are often useful for emphasis. We speak, too, of fact-finding, the discovery or determination of fast, accurate information, in which evidence settles what the events or the truth actually were. Standing opposed to fact is the literature that treats real people or events as if they were fictional, or uses real people or events as essential elements in an otherwise fictional rendition; 'factious' marks what is given to or promotes internal dissension, while 'factitious' marks what is produced artificially rather than by a natural process, lacking authenticity or genuineness, other than what it is represented to be.

Substantively, a theory is a set of statements or principles devised to explain a group of facts or phenomena, especially one that has been tested or is supported by experiment and can be used to make predictions about natural phenomena. A theory has the consistency of explanatory statements, accepted principles, and methods of analysis; it extends to a set of theorems that make up a systematic view of a branch of mathematics or of science, and to the belief or principle that guides action or assists comprehension or judgement, often an ascription based on limited information or knowledge, a conjecture, a speculative assumption asserted at the outset. 'Theoretical' means affiliated with, or based on, theory, restricted to theory rather than practice, as a theoretical physicist is given to speculative theorizing; it also covers the given idea awaiting demonstration. In mathematics a theorem is a proposition that has been or is to be proved from explicit assumptions; whether such theoretical assessments or hypothetical theorizings are themselves thoughtful measures is a question about the characteristics by which we measure their quality and value.

Looking back, one can see a surprising degree of homogeneity among the philosophers of the early twentieth century about the topics central to their concerns. More striking still is the apparent profundity and abstruseness of those concerns, which appear at first glance to be far removed from the familiar debates of previous centuries, between realism and idealism, say, or rationalism and empiricism.

Thus, no matter what the current debate or discussion, the central issue often concerns conceptual and contentual representation: without concepts one is without ideas, and one is left in one fell swoop with only the underlying paradox of why there is something instead of nothing. What is it that makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding? The philosophical problem is to demystify this power, and to relate it to what we know of ourselves, of subjective matters, and of our inherent perception of the world and its surrounding surfaces.

Contributions to this study include the theory of speech acts and the investigation of communication, especially the relationship between words and ideas, and between words and the world. The content of an utterance or sentence is the proposition or claim it makes about the world. By extension, the content of a predicate, an expression that combines with one or more singular terms to make a sentence, is the condition the entities referred to may satisfy, in which case the resulting sentence will be true; consequently we may think of a predicate as a function from things to sentences, or even to truth-values. The content of any other sub-sentential component is what it contributes to the content of sentences that contain it. The nature of content is the central concern of the philosophy of language.

What a person expresses by a sentence often depends on the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I call a 'beech', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in alternative, different environments, but in which everything appears the same to each of them. The wide content of their thoughts and sayings will be different if the surrounding situation is appropriately different: 'situation' may here include the actual objects they perceive, or the chemical or physical kinds of objects in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of a term they use. The narrow content is that part of their thought that remains identical, through the identity of the way things appear, despite these differences of surroundings. Partisans of wide (sometimes called broad) content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being narrow content plus context.

All in all, people are commonly characterized by assuming their rationality, and the most evident display of our rationality is the capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no particular reason that their deliberations should take any more verbal a form than their actions. It is permanently tempting, however, to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world and its surrounding surface structures. The model has been attacked, notably by Ludwig Wittgenstein (1889-1951), whose most influential application of these ideas was in the philosophy of mind. Wittgenstein explores the role that reports of introspection, or sensations, or intentions, or beliefs actually play in our social lives, in order to undermine the Cartesian picture that they function to describe the goings-on in an inner theatre of which the subject is the lone spectator. Passages that have subsequently become known as the rule-following considerations and the private language argument are among the fundamental topics of modern philosophy of language and mind, although their precise interpretation is endlessly controversial.

Effectively, this is the hypothesis especially associated with Jerry Fodor (1935-), who is known for his resolute realism about the nature of mental functioning: that thinking occurs in a language different from one's customary native language, but underlying and explaining our competence with it. The idea is a development of the notion of an innate universal grammar (Avram Noam Chomsky, 1928-), and draws on the analogy with a computer, whose surface behaviour is explained by the execution of linguistically complex sets of instructions written in a language quite different from the one in which its outputs appear.

As an explanation of ordinary language-learning and competence, the hypothesis has not found universal favour: it apparently explains ordinary representational powers only by invoking in the learner an innate capacity for translating into an inner language whose own powers are mysteriously a biological given. A related view, the theory-theory, holds that everyday attributions of intentionality, beliefs, and meaning to other persons proceed by means of a tacit use of a theory that enables one to construct these interpretations as explanations of their doings. We commonly hold this view along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending upon which feature of theories we are stressing. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on.

The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meaning of terms in its native language. An alternative view is that understanding others is not gained by the tacit use of a theory, enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation in their shoes or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the Verstehen tradition associated with Dilthey (1833-1911), Weber (1864-1920) and Collingwood (1889-1943).

We may call any process of drawing a conclusion from a set of premises a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning; otherwise, pure or theoretical reasoning. Evidently, such processes may be good or bad: if they are good, the premises support or even entail the conclusion drawn; if they are bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. Partly, this is because we are often concerned to draw conclusions that go beyond our premises in ways that the conclusions of logically valid arguments do not, using evidence to reach a wider conclusion. Some are pessimistic about the prospects of confirmation theory here, denying that we can assess the results of abduction in terms of probability. A deductive process of reasoning, by contrast, is one in which the conclusion is supposed to follow from the premises, i.e., the inference is logically valid; deducibility can be defined syntactically, without any reference to the intended interpretation of the theory. Furthermore, as we reason we use an indefinite store of traditional knowledge or commonsense presuppositions about what is likely or not, and one task of an automated reasoning project is to mimic this casual use of knowledge of the way of the world in computer programs.

There are two main views on the nature of theories: according to the received view, theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models (Suppe, 1974). Most theories first emerge as bodies of supposed truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an ideal for organizing a theory: one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all the others are deductively implied are called the axioms. David Hilbert (1862-1943) argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, which are means of representing physical processes and mathematical structures, could themselves become objects of mathematical investigation.

A theory, in the philosophy of science, is a generalization or set of generalizations purportedly referring to unobservable entities, i.e., atoms, genes, quarks, unconscious wishes. The ideal gas law, for example, refers only to such observables as pressure, temperature, and volume; the molecular-kinetic theory refers to molecules and their properties. Although an older usage suggests the lack of adequate evidence in support ('merely a theory'), present philosophical usage does not carry that connotation: Einstein's special and general theories of relativity, for example, are taken to be extremely well founded. In the tradition (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few governing principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is caused by them. When the principles were taken as epistemologically prior, that is, as axioms, they were taken to be either epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or such that all truths follow from them by deductive inference. Gödel (1984) showed, in the spirit of Hilbert, treating axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized; more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture all of the truths.
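
For readers who want this result in its usual formal dress, the following is a standard modern statement of Gödel's first incompleteness theorem; the symbols are introduced here only for illustration and are not the author's.

$$\text{If } T \text{ is a consistent, effectively axiomatized theory that includes elementary arithmetic, then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \neg G_T.$$

In particular, no consistent, effectively decidable set of axioms proves every truth of elementary number theory, which is the sense in which any such set of axioms is too small to capture all of the truths.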

The notion of truth occurs with remarkable frequency in our reflections on language, thought and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to a conclusion is the mark of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. To assess the plausibility of such theses, and to refine them and to explain why they hold (if they do), we require some view of what truth is: a theory that would account for its properties and its relations to other matters. Thus, there can be little prospect of understanding our most important faculties in the absence of a good theory of truth.

Such a theory, however, has been notoriously elusive. The ancient idea that truth is some sort of correspondence with reality has still never been articulated satisfactorily, and the nature of the alleged correspondence and the alleged reality remain objectionably obscure. Yet the familiar alternative suggestions, that true beliefs are those that are mutually coherent, or pragmatically useful, or verifiable in suitable conditions, have each been confronted with persuasive counterexamples. A twentieth-century departure from these traditional analyses is the view that truth is not a property at all: that the syntactic form of the predicate 'is true' distorts its real semantic character, which is not to describe propositions but to endorse them. Nevertheless, this radical approach has also faced difficulties and suggests, counterintuitively, that truth cannot have the vital theoretical role in semantics, epistemology and elsewhere that we are naturally inclined to give it. Thus, truth threatens to remain one of the most enigmatic of notions: an explicit account of it can seem essential yet beyond our reach. All the same, recent work provides some grounds for optimism.

The belief that snow is white owes its truth to a certain feature of the external world, namely, to the fact that snow is white. Similarly, the belief that dogs bark is true because of the fact that dogs bark. This trivial observation leads to what is perhaps the most natural and popular account of truth, the correspondence theory, according to which a belief (statement, sentence, proposition, etc.) is true just in case there exists a fact corresponding to it (Wittgenstein, 1922; Austin, 1950). This thesis is unexceptionable in itself; however, if it is to provide a rigorous, substantial and complete theory of truth, it must be more than merely a picturesque way of asserting all equivalences of the form 'the belief that p is true if and only if p'.

For that, it must be supplemented with accounts of what facts are, and of what it is for a belief to correspond to a fact, and these are the problems on which the correspondence theory of truth has foundered. For one thing, it is far from obvious that any significant gain in understanding is achieved by reducing 'the belief that snow is white is true' to 'the fact that snow is white exists': these expressions seem equally resistant to analysis and too close in meaning for one to provide an illuminating account of the other. In addition, the relationship that supposedly holds between the belief that snow is white and the fact that snow is white, between the belief that dogs bark and the fact that dogs bark, and so on, is very hard to identify. The best attempt to date is Wittgenstein's (1922) so-called picture theory, on which an elementary proposition is a configuration of terms, standing for whatever state of affairs it reports, as an atomic fact is a configuration of simple objects; an atomic fact corresponds to an elementary proposition, and makes it true, when their configurations are identical and when the terms in the proposition refer to the similarly placed objects in the fact; and the truth-value of each complex proposition is entailed by the truth-values of the elementary ones. However, even if this account is correct as far as it goes, it would need to be completed with plausible theories of logical configuration, elementary proposition, reference and entailment, none of which has been forthcoming.

The central characteristic of truth, one that any adequate theory must explain, is that when a proposition satisfies its conditions of proof or verification, then it is regarded as true. To the extent that the property of corresponding with reality is mysterious, we are going to find it impossible to see why what we take to verify a proposition should indicate the possession of that property. Therefore, a tempting alternative to the correspondence theory, an alternative that eschews obscure metaphysical concepts and explains quite straightforwardly why verifiability implies truth, is simply to identify truth with verifiability (Peirce, 1932). This idea can take various forms. One version involves the further assumption that verification is holistic, in that a belief is justified (i.e., verified) when it is part of an entire system of beliefs that is consistent and harmonious (Bradley, 1914, and Hempel, 1935). This is known as the coherence theory of truth. Another version involves the assumption that, associated with each proposition, there is some specific procedure for finding out whether one should believe it or not. On this account, to say that a proposition is true is to say that the appropriate procedure would verify it (Dummett, 1979, and Putnam, 1981). Within mathematics this amounts to the identification of truth with provability.

The attractions of the verificationist account of truth are that it is refreshingly clear compared with the correspondence theory, and that it succeeds in connecting truth with verification. The trouble is that the bond it postulates between these notions is implausibly strong. We do indeed take verification to indicate truth, but we also recognize the possibility that a proposition may be false in spite of there being impeccable reasons to believe it, and that a proposition may be true although we are not able to discover that it is. Verifiability and truth are no doubt highly correlated, but surely not the same thing.

A third well-known account of truth is known as pragmatism (James, 1909, and Papineau, 1987). As we have just seen, the verificationist selects a prominent property of truth and treats it as the essence of truth. Similarly, the pragmatist focuses on another important characteristic, namely that true beliefs are a good basis for action, and takes this to be the very nature of truth. True assumptions are said to be, by definition, those that provoke actions with desirable results. Again, we have an account with a single attractive explanatory feature; but again the relation it postulates between truth and its alleged analysans, in this case utility, is implausibly close. Granted, true beliefs tend to foster success, but it happens regularly that actions based on true beliefs lead to disaster, while false assumptions, by pure chance, produce wonderful results.

One of the few uncontroversial facts about truth is that the proposition that snow is white is true if and only if snow is white, the proposition that lying is wrong is true if and only if lying is wrong, and so on. Traditional theories acknowledge this fact but regard it as insufficient and, as we have seen, inflate it with some further principle of the form ''X' is true if and only if 'X' has property 'P'' (such as corresponding to reality, verifiability, or being suitable as a basis for action), which is supposed to specify what truth is. Some radical alternatives to the traditional theories result from denying the need for any such further specification (Ramsey, 1927; Strawson, 1950; Quine, 1990). For example, one might suppose that the basic theory of truth contains nothing more than equivalences of the form 'The proposition that p is true if and only if p' (Horwich, 1990).

To see the point of such a minimal theory, suppose you wish to endorse Einstein's claim without knowing what it was. What you need is a proposition 'K' with the following property: from 'K' together with any further premise of the form 'Einstein's claim is the proposition that p', you can infer 'p', whatever it is. Now suppose, as the deflationist says, that our understanding of the truth predicate consists in the stipulative decision to accept any instance of the schema 'The proposition that p is true if and only if p'; then the problem is solved. For if 'K' is the proposition 'Einstein's claim is true', it will have precisely the inferential power needed: from it and 'Einstein's claim is the proposition that quantum mechanics is wrong', you can use Leibniz's law to infer 'The proposition that quantum mechanics is wrong is true', which, given the relevant axiom of the deflationary theory, allows you to derive 'Quantum mechanics is wrong'. Thus, one point in favour of the deflationary theory is that it squares with a plausible story about the function of our notion of truth, in that its axioms explain that function without the need for further analysis of what truth is.
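
To make the inferential pattern explicit, the derivation just sketched can be laid out step by step. This is only a schematic reconstruction; the notation True(...) and the angle brackets for 'the proposition that ...' are mine, not drawn from the text or its sources.

\begin{align*}
1.\;& \mathrm{True}(k) && \text{premise: Einstein's claim is true, where } k \text{ names Einstein's claim}\\
2.\;& k = \langle \text{quantum mechanics is wrong} \rangle && \text{premise: Einstein's claim is the proposition that quantum mechanics is wrong}\\
3.\;& \mathrm{True}(\langle \text{quantum mechanics is wrong} \rangle) && \text{from 1 and 2 by Leibniz's law}\\
4.\;& \mathrm{True}(\langle \text{quantum mechanics is wrong} \rangle) \leftrightarrow \text{quantum mechanics is wrong} && \text{instance of the equivalence schema}\\
5.\;& \text{quantum mechanics is wrong} && \text{from 3 and 4}
\end{align*}

Only the identity premise, Leibniz's law, and one instance of the schema are used, which is the deflationist's point: no further analysis of truth is called upon.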

Not all variants of deflationism have this virtue. According to the redundancy/performative theory of truth, the pair of sentences 'The proposition that p is true' and plain 'p' have the same meaning and express the same statement as one another, so it is a syntactic illusion to think that 'is true' attributes any sort of property to a proposition (Ramsey, 1927, and Strawson, 1950). Yet in that case it becomes hard to explain why we are entitled to infer 'The proposition that quantum mechanics is wrong is true' from 'Einstein's claim is the proposition that quantum mechanics is wrong' and 'Einstein's claim is true'. For if truth is not a property, then we can no longer account for the inference by invoking the law that if 'X' is identical with 'Y' then any property of 'X' is a property of 'Y', and vice versa. Thus the redundancy/performative theory, by identifying rather than merely correlating the contents of 'The proposition that p is true' and 'p', precludes the prospect of a good explanation of one of truth's most significant and useful characteristics. It is better, then, to restrict the claim to the weaker equivalence schema: the proposition that p is true if and only if p.

Support for deflationism depends upon the possibility of showing that its axioms, instances of the equivalence schema unsupplemented by any further analysis, will suffice to explain all the central facts about truth, for example, that the verification of a proposition indicates its truth, and that true beliefs have a practical value. The first of these facts follows trivially from the deflationary axioms: given our a priori knowledge of the equivalence of 'p' and 'the proposition that p is true', any reason to believe that 'p' becomes an equally good reason to believe that the proposition that 'p' is true. The second fact can also be explained in terms of the deflationary axioms, but not quite so easily. Consider, to begin with, beliefs of the form that if I perform the act 'A', then my desires will be fulfilled. The psychological role of such a belief is, roughly, to cause the performance of 'A'. In other words, given that I do have the belief, then typically:

I will perform the act A.

Notice also that when the belief is true then, given the deflationary axioms, the performance of 'A' will in fact lead to the fulfilment of one's desires, i.e., if the belief is true, then if I perform 'A', my desires will be fulfilled.

Therefore, if the belief is true, then my desires will be fulfilled. So valuing the truth of beliefs of that form is quite reasonable. Moreover, beliefs of that form are themselves derived by inference from other beliefs and can be expected to be true only if those other beliefs are true. So assigning a value to the truth of any belief that might be used in such an inference is reasonable.
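
Laid out schematically, the argument of the last three paragraphs runs as follows; the letter B and the labels Believe and True are mine, added only to display the structure of the reasoning.

\begin{align*}
&\text{Let } B \text{ be the belief that } (\text{I perform } A \rightarrow \text{my desires are fulfilled}).\\
1.\;& \mathrm{Believe}(B) \rightarrow \text{I perform } A && \text{psychological role of } B\\
2.\;& \mathrm{True}(B) \rightarrow (\text{I perform } A \rightarrow \text{my desires are fulfilled}) && \text{equivalence schema applied to } B\\
3.\;& \bigl(\mathrm{Believe}(B) \wedge \mathrm{True}(B)\bigr) \rightarrow \text{my desires are fulfilled} && \text{from 1 and 2}
\end{align*}

So, given the belief's ordinary psychological role, its being true guarantees that the resulting action gets me what I want, which is why truth in such beliefs is worth valuing.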

To the extent that such deflationary accounts can be given of all the facts involving truth, the explanatory demands on a theory of truth will be met by the collection of all statements like 'The proposition that snow is white is true if and only if snow is white', and the sense that some deep analysis of truth is needed will be undermined.

Nonetheless, there are several strongly felt objections to deflationism. One reason for dissatisfaction is that the theory has an infinite number of axioms and therefore cannot be completely written down. It can be described as the theory whose axioms are the propositions of the form 'it is true that p if and only if p', but it cannot be explicitly formulated. This alleged defect has led some philosophers to develop theories that show, first, how the truth of any proposition derives from the referential properties of its constituents, and second, how the referential properties of primitive constituents are determined (Tarski, 1943, and Davidson, 1969). However, it remains controversial whether all propositions, including belief attributions, laws of nature, and counterfactual conditionals, depend for their truth values on what their constituents refer to. In addition, there is no immediate prospect of a presentable, finite theory of reference, so it is far from clear that the infinite, list-like character of deflationism can be avoided.
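
To illustrate what such a compositional theory looks like, here is the standard Tarski-style clause for the simplest kind of sentence; it is a textbook illustration rather than a formulation taken from the works the text cites.

$$\text{A sentence `}Fa\text{' is true if and only if the object referred to by `}a\text{' satisfies the predicate `}F\text{'.}$$

Clauses of this kind, together with clauses for the connectives and quantifiers (a conjunction is true if and only if both conjuncts are true, and so on), would let the truth condition of every sentence be derived from finitely many facts about reference and satisfaction, which is exactly the finite systematization the deflationary theory is said to lack.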

Additionally, it is commonly supposed that problems about the nature of truth are intimately bound up with questions about the accessibility and autonomy of facts in various domains: questions about whether the facts can be known, and whether they can exist independently of our capacity to discover them (Dummett, 1978, and Putnam, 1981). One might reason, for example, that if ''T' is true' means nothing more or less than that 'T' will be verified, then certain forms of scepticism, specifically those that doubt the correctness of our methods of verification, will be precluded, and the facts will have been revealed as dependent on human practices. Alternatively, it might be said that if truth were an inexplicable, primitive, non-epistemic property, then the fact that 'T' is true would be completely independent of us. Moreover, we could, in that case, have no reason to assume that the propositions we believe actually have this property, so scepticism would be unavoidable. In a similar vein, it might be thought that a special, and perhaps undesirable, feature of the deflationary approach is that it deprives truth of such metaphysical or epistemological implications.

Upon closer scrutiny, however, it is far from clear that there exists any account of truth with consequences regarding the accessibility or autonomy of non-semantic matters. For although an account of truth may be expected to have such implications for facts of the form ''T' is true', it cannot be assumed without further argument that the same conclusions will apply to the fact that 'T'. For it cannot be assumed that 'T' and ''T' is true' are equivalent to one another, given the account of truth that is being employed. Of course, if truth is defined in the way that the deflationist proposes, then the equivalence holds by definition. Nevertheless, if truth is defined by reference to some metaphysical or epistemological characteristic, then the equivalence schema is thrown into doubt, pending some demonstration that the truth predicate, in the sense assumed, satisfies it; and in so far as there are thought to be epistemological problems hanging over 'T' that do not threaten ''T' is true', giving the needed demonstration will be difficult. Similarly, if truth is so defined that the fact that 'T' is more, or less, independent of human practices than the fact that 'T' is true, then again it is unclear that the equivalence schema will hold. It would seem, therefore, that the attempt to base epistemological or metaphysical conclusions on a theory of truth must fail, because in any such attempt the equivalence schema will be simultaneously relied on and undermined.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein (1889-1951), and is a leading idea of Davidson (1917-2003). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.

The conception of meaning as truth-conditions need not and should not be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should instead be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. The truth-condition of a statement is simply the condition the world must meet if the statement is to be true, and to know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth-condition can only be stated by repeating the very same statement: the truth-condition of 'snow is white' is that snow is white; the truth-condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running on the spot disqualifies truth-conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
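
The worry about running on the spot is easiest to see when the clauses such a theory delivers are written out. The schema below is a standard homophonic illustration, not something drawn from the text itself.

$$\text{`Snow is white' is true if and only if snow is white.}$$

More generally, a truth-conditional theory aims to yield, for each sentence s of the language, a theorem of the form 's is true if and only if p', where what replaces p states the condition under which s is true; in the homophonic case it is simply the sentence s itself, used rather than mentioned, which is why the account can seem uninformative.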

Whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding, the philosophical problem is to demystify this power and to relate it to what we know of ourselves and the world. Contributions to the study include the theory of speech acts and the investigation of communication, especially the relationship between words and ideas and between words and the world. What a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in alternative, different environments in which everything nonetheless appears the same to each of them, and between them such cases define a space of philosophical problems. These are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. The content of an utterance or sentence is the proposition or claim it makes about the world; by extension, the content of a predicate or other sub-sentential component is what it contributes to the content of sentences that contain it. The nature of content is the central concern of the philosophy of language.

Here belong, in particular, the problems of indeterminacy of translation, inscrutability of reference, language, predication, reference, rule-following, semantics, translation, and the topics falling under the subordinate headings associated with logic. The loss of confidence in determinate meaning is an element common both to postmodern uncertainties in the theory of criticism and to the analytic tradition that follows writers such as Quine (1908-2000). Still it may be asked: why should we suppose that fundamental epistemic notions should be accounted for in behavioural terms? What grounds are there for supposing that whether a subject knows that p is a matter of the relation between the subject's statements and physical theory, between nature and its mirror? The answer is that the only alternative seems to be to take knowledge of inner states as the premises from which our knowledge of other things is inferred, and without which that knowledge would be ungrounded. However, it is not really coherent, and does not in the last analysis make sense, to suggest that human knowledge has foundations or grounds of this kind. It should be remembered that to say that truth and knowledge can only be judged by the standards of our own day is not to say that truth is less meaningful, nor that it is more cut off from the world, than we had supposed. It is just that nothing counts as justification unless by reference to what we already accept, and that there is no way to get outside our beliefs and our language so as to find some test other than coherence. The fact is that only professional philosophers have thought it might be otherwise, since only they are haunted by the clouds of epistemological scepticism.

What Quine opposes as residual Platonism is not so much the hypostasizing of non-physical entities as the notion of correspondence with things, entities with real and independent existence, as the measure of value for our formal practices. Unfortunately, Quine, for all that this is incompatible with his basic insight, substitutes for such correspondence a correspondence to physical entities, and especially to the basic entities, whatever they turn out to be, of physical science. Nevertheless, when their doctrines are purified, they converge on a single claim: that no account of knowledge can depend on the assumption of some privileged relation to reality. Their work brings out why an account of knowledge can amount only to a description of human behaviour.

What, then, is to be said of these inner states, and of the direct reports of them that have played so important a role in traditional epistemology? For a person to have a feeling is nothing else than for him to have the ability to make a certain type of non-inferential report; to attribute feelings to infants is to acknowledge in them latent abilities of this kind. Non-conceptual, non-linguistic knowledge of what feelings or sensations are like is attributed to beings on the basis of their potential membership of our community. Infants and the more attractive animals are credited with having feelings on the basis of the spontaneous sympathy we extend to anything humanoid, in contrast with the mere response to stimuli attributed to photoelectric cells and to animals about which no one feels sentimental. It would be wrong to suppose that the moral prohibition against hurting infants and the better-looking animals is grounded in their possession of feelings; the relation of dependence is really the other way round. Similarly, we could not be mistaken in supposing that a four-year-old child has knowledge but a one-year-old does not, any more than we could be mistaken in taking the word of a statute that eighteen-year-olds can marry freely but seventeen-year-olds cannot. (There is no more ontological ground for the distinction it may suit us to make in the former case than in the latter.) Again, a question such as 'Are robots conscious?' calls for a decision on our part whether or not to treat robots as members of our linguistic community. All this is of a piece with the insight brought into philosophy by Hegel (1770-1831), that the individual apart from his society is just another animal.

Willard Van Orman Quine, the most influential American philosopher of the latter half of the twentieth century, spent the wartime period in naval intelligence, punctuating the rest of his career with extensive foreign lecturing and travel. Quine's early work was on mathematical logic, issuing in A System of Logistic (1934), Mathematical Logic (1940), and Methods of Logic (1950); it was with the collection of papers From a Logical Point of View (1953) that his philosophical importance became widely recognized. His dominating concern with problems of convention, meaning, and synonymy was cemented by Word and Object (1960), in which the indeterminacy of radical translation first takes centre stage. In this and many subsequent writings Quine takes a bleak view of the nature of the language with which we ascribe thoughts and beliefs to ourselves and others. These intentional idioms resist smooth incorporation into the scientific world view, and Quine responds with scepticism toward them, not quite endorsing eliminativism, but regarding them as second-rate idioms, unsuitable for describing strict and literal facts. For similar reasons he has consistently expressed suspicion of the logical and philosophical propriety of appeal to logical possibilities and possible worlds. The languages that are properly behaved and suitable for literal and true description of the world are those of mathematics and science, and the entities to which our best theories refer must be taken with full seriousness in our ontologies. Although an empiricist, Quine thus supposes that the abstract objects of set theory are required by science, and therefore exist. In the theory of knowledge Quine is associated with a holistic view of verification, conceiving of a body of knowledge in terms of a web touching experience at the periphery, but with each point connected by a network of relations to other points.

Quine is also known for the view that epistemology should be naturalized, or conducted in a scientific spirit, with the object of investigation being the relationship, in human beings, between the input of experience and the output of belief. Although Quine's approaches to the major problems of philosophy have been attacked as betraying undue scientism and sometimes behaviourism, the clarity of his vision and the scope of his writing made him the major focus of Anglo-American work of the past forty years in logic, semantics, and epistemology. As well as the works cited, his writings include The Ways of Paradox and Other Essays (1966), Ontological Relativity and Other Essays (1969), Philosophy of Logic (1970), The Roots of Reference (1974) and The Time of My Life: An Autobiography (1985).

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth, and justification, and they combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than the belief that you have a monster in the garden?

One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than by believing that you have a monster in the garden. Belief also has an influence on action: you will act differently if you believe that you are reading a page than if you believe something about a monster. Perception and action do not by themselves fix the content of a belief, however: the same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays within a network of relations to other beliefs, some more directly causal than others, the role it has in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I would from other beliefs, just as I infer that belief from different things than I infer other beliefs from.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has; they are the fundamental source of the content of belief. That is how coherence comes in. A belief has the representational content it does because of the way in which it coheres within a system of beliefs (Rosenberg, 1988). We might distinguish weak coherence theories of the content of beliefs from stronger coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of a belief; strong coherence theories affirm that coherence is the sole determinant of the content of a belief.
