June 21, 2010

ORIENTING COMPOSITIONS OF THOUGHT
by Richard J. Kosciejew

Until very recently it could have been said that most approaches to the philosophy of science were ‘cognitive’. This includes ‘logical positivism’, for nearly all of those who wrote about the nature of science were in agreement that science ought to be ‘value-free’. This had been a particular emphasis on the part of the first positivists, as it would be of their twentieth-century successors. Science, so it was said, deals with ‘facts’, and facts and values are irreducibly distinct. Facts are objective: They are what we seek in our knowledge of the world. Values are subjective: They bear the mark of human interest; they are the radically individual products of feeling and desire. Value cannot, therefore, be inferred from fact, and fact ought not to be influenced by value. There were philosophers, notably some in the Kantian tradition, who viewed the relation of fact to value rather differently. But the other side had the legacy of three centuries of largely empiricist reflection on the ‘new’ sciences ushered in by Galileo Galilei (1564-1642), the Italian scientist whose distinction belongs to the history of physics and astronomy rather than to natural philosophy.

The philosophical importance of Galileo’s science rests largely upon the following closely related achievements: (1) his stunningly successful arguments against Aristotelean science, (2) his proofs that mathematics is applicable to the real world, (3) his conceptually powerful use of experiments, both actual and imagined, (4) his treatment of causality, replacing appeal to hypothesized natural ends with a quest for efficient causes, and (5) his unwavering confidence in the new style of theorizing that would come to be known as mechanical explanation.

A century later, the maxim that scientific knowledge is ‘value-laden’ seems almost as entrenched as its opposite was earlier. It is supposed that the gap between fact and value has been breached, and philosophers of science seem quite at home with the thought that science and values may be closely intertwined after all. What has happened to bring about such an apparently radical change? What are its implications for the objectivity of science, the prized characteristic that, from Plato’s time onwards, has been assumed to set off real knowledge (epistēmē) from mere opinion (doxa)? To answer these questions adequately, one would first have to know something of the reasons behind the decline of logical positivism, as well as of the diversity of the philosophies of science that have succeeded it.

More generally, the interdisciplinary field of cognitive science is burgeoning on several fronts. Contemporary philosophical reflection about the mind - which has been quite intensive - has been influenced by this empirical inquiry, to the extent that the boundary lines between them are blurred in places.

Nonetheless, the philosophy of mind at its core remains a branch of metaphysics, traditionally conceived. Philosophers continue to debate foundational issues in terms not radically different from those in vogue in previous eras. Many issues in the metaphysics of science hinge on the notion of ‘causation’. This notion is as important in science as it is in everyday thinking, and much scientific theorizing is concerned specifically to identify the ‘causes’ of various phenomena. However, there is little philosophical agreement on what it is to say that one event is the cause of another.

Modern discussion of causation starts with the Scottish philosopher, historian, and essayist David Hume (1711-76). Hume denies that we have innate ideas, that the causal relation is observably anything other than ‘constant conjunction’, that there are observable necessary connections anywhere, and that there is either an empirical or a demonstrative proof for the assumptions that the future will resemble the past and that every event has a cause. He holds, moreover, that there is an irresolvable dispute between advocates of free will and determinism, that extreme scepticism is coherent, and that we cannot find the experiential source of our ideas of self, substance, or God.

According to Hume (1978), one event causes another if and only if events of the type to which the first event belongs regularly occur in conjunction with events of the type to which the second event belongs. This formulation, however, leaves a number of questions open. Firstly, there is the problem of distinguishing genuine ‘causal laws’ from ‘accidental regularities’. Not all regularities are sufficiently lawlike to underpin causal relationships. Being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that these screws are made of copper because they are in my desk. Secondly, the idea of constant conjunction does not give a ‘direction’ to causation. Causes need to be distinguished from effects. But knowing that A-type events are constantly conjoined with B-type events does not tell us which of ‘A’ and ‘B’ is the cause and which the effect, since constant conjunction is itself a symmetric relation. Thirdly, there is a problem about ‘probabilistic causation’. When we say that causes and effects are constantly conjoined, do we mean that the effects are always found with the causes, or is it enough that the causes make the effects probable?
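The two readings at issue in the third question can be set side by side schematically; the notation is an illustrative gloss, not Hume’s own. On the strict regularity reading, for event types ‘C’ and ‘E’:

\[
C \text{ causes } E \iff \text{every } C\text{-event is conjoined with an } E\text{-event},
\]

whereas on the probabilistic reading it is enough that the cause raises the probability of the effect:

\[
P(E \mid C) > P(E \mid \neg C).
\]

Note that the first condition is symmetric in just the sense complained of above, and the second, by itself, fares no better: Positive probabilistic relevance is likewise mutual, so neither formula alone fixes which term is the cause.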

Many philosophers of science during the past century have preferred to talk about ‘explanation’ rather than causation. According to the covering-law model of explanation, something is explained if it can be deduced from premises which include one or more laws. As applied to the explanation of particular events, this implies that one particular event can be explained if it is linked by a law to some other particular event. However, while they are often treated as separate theories, the covering-law account of explanation is at bottom little more than a variant of Hume’s constant conjunction account of causation. This affinity shows up in the fact that the covering-law account faces essentially the same difficulties as Hume’s: (1) in appealing to deduction from ‘laws’, it needs to explain the difference between genuine laws and accidentally true regularities; (2) it licenses the ‘explanation’ of causes by effects, as well as of effects by causes - after all, it is as easy to deduce the height of a flagpole from the length of its shadow and the laws of optics as to deduce the length of the shadow from the height; (3) are the laws invoked in explanation required to be exceptionless and deterministic, or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes cancer more likely, in explaining why some particular person develops cancer?
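Hempel’s deductive-nomological schema gives the covering-law model its standard form; ‘L’ for general laws and ‘C’ for particular antecedent conditions are the conventional labels:

\[
\underbrace{L_1, \ldots, L_r}_{\text{general laws}},\ \underbrace{C_1, \ldots, C_k}_{\text{particular conditions}} \ \vdash\ E.
\]

The explanandum ‘E’ is explained just in case it is deducible from premises that essentially include at least one law. The flagpole case fits this schema equally well in both directions, which is exactly the second difficulty noted above.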

Nevertheless, one of the central aims of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. By introducing ‘teleological considerations’, such an account views beliefs as states with a biological purpose and analyses their truth conditions specifically as those conditions that they are biologically supposed to covary with.

A teleological theory of representation needs to be supplemented with an account of biological purpose generally - a selectionist account, according to which item ‘F’ has purpose ‘G’ if and only if it is now present as a result of past selection by some process which favoured items with ‘G’. So a given belief type will have the purpose of covarying with ‘P’, say, if and only if some mechanism has selected it because it has covaried with ‘P’ in the past.
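Put schematically (a restatement of the biconditionals just given, in notation of my own rather than the text’s):

\[
\mathrm{Purpose}(F, G) \iff F \text{ is now present because past selection favoured items bearing } G,
\]
\[
\mathrm{Content}(B) = P \iff B \text{ was selected because its tokens covaried with } P.
\]

The second line simply specializes the first to belief types, with ‘covarying with P’ playing the role of ‘G’.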

Along the same lines, a teleological theory holds that ‘r’ represents ‘χ’ if it is r’s function to indicate (i.e., covary with) ‘χ’. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical theories of functions and a-historical theories. Historical theories individuate functional states (hence, contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was ‘learned’, or the way it evolved. An historical theory might hold that the function of ‘r’ is to indicate ‘χ’ only if the capacity to token ‘r’ was developed (selected, learned) because it indicates ‘χ’. Thus, a state physically indistinguishable from ‘r’ (physical states being a-historical) but lacking r’s historical origins would not represent ‘χ’ according to historical theories.

The American philosopher of mind Jerry Fodor (1935-) is known for a resolute ‘realism’ about the nature of mental functioning, taking the analogy between thought and computation seriously. Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of ‘holists’, such as the American philosopher Donald Davidson (1917-2003), or ‘instrumentalists’ about mental ascription, such as the British philosopher of logic and language Michael Dummett (1925-). In recent years he has become a vocal critic of some of the aspirations of cognitive science.

Nonetheless, any suggestion that teleology can settle questions of ‘causation’ and ‘content’ must face a fundamental difficulty, the problem of misrepresentation: We suppose that there is a causal path from A’s to ‘A’s and a causal path from B’s to ‘A’s, and our problem is to find some difference between B-caused ‘A’s and A-caused ‘A’s in virtue of which the former but not the latter misrepresent. Perhaps the two paths differ in their counterfactual properties. In particular, though A’s and B’s both cause ‘A’s as a matter of fact, perhaps we can assume that only A’s would cause ‘A’s in - as one can say - ‘optimal circumstances’. We could then hold that a symbol expresses its ‘optimal property’, viz., the property that would causally control its tokening in optimal circumstances. Correspondingly, when the tokening of a symbol is causally controlled by properties other than its optimal property, the tokens that eventuate are ipso facto wild.

Suppose, now, that this story about ‘optimal circumstances’ is proposed as part of a naturalized semantics for mental representations. In that case it is, of course, essential that it be possible to specify the optimal circumstances for tokening a mental representation in terms that are not themselves either semantical or intentional. (It would not do, for example, to identify the optimal circumstances for tokening a symbol as those in which the tokens are true; that would be to assume precisely the sort of semantical notions that the theory is supposed to naturalize.) The suggestion - to put it in a nutshell - is that appeals to ‘optimality’ should be buttressed by appeals to ‘teleology’: Optimal circumstances are the ones in which the mechanisms that mediate symbol tokening are functioning ‘as they are supposed to’. In the case of mental representations, these would be paradigmatically circumstances where the mechanisms of belief fixation are functioning as they are supposed to.

So, then: The teleology of the cognitive mechanisms determines the optimal conditions for belief fixation, and the optimal conditions for belief fixation determine the content of beliefs. So the story goes.

To put this objection in slightly other words: The teleology story perhaps strikes one as plausible in that it understands one normative notion - truth - in terms of another normative notion - optimality. But this appearance of fit is spurious: There is no guarantee that the kind of optimality that teleology reconstructs has much to do with the kind of optimality that the explication of ‘truth’ requires. When mechanisms of repression are working ‘optimally’ - when they’re working ‘as they’re supposed to’ - what they deliver are likely to be ‘falsehoods’.

Or again: There’s no obvious reason why coitions that are optimal for the tokening of one sort of mental symbol need be optimal for the tokening of other sorts. Perhaps the optimal condition for fixing beliefs about very large objects (you do best from the middle distance) are different from the optimal conditions for fixing beliefs about very small ones (you do best with your eyes closed) are different from the optimal conditions for fixing beliefs sights (you do best with your eyes open). But this raises the possibility that if we’re to say which conditions are optimal for the fixation of a belief, we’ll have to know what the content of the belief is - what it’s a belief about. Our explication of content would then require a notion of optimality whose explication in turn requires a notion of content, and the resulting pile would clearly be unstable.

Functional role theories, by contrast, hold that r’s representing ‘χ’ is grounded in the functional role ‘r’ has in the representing system, i.e., in the relations imposed by specified cognitive processes between ‘r’ and other representations in the system’s repertoire. Functional role theories take their cue from such common-sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

That being said, nowhere is the new period of collaboration between philosophy and other disciplines more evident than in the new subject of cognitive science. Cognitive science from its very beginning has been ‘interdisciplinary’ in character, and is in effect the joint property of psychology, linguistics, philosophy, computer science and anthropology. There is, therefore, a great variety of different research projects within cognitive science, but its core ideology rests on the assumption that the mind is best viewed as analogous to a digital computer. The basic idea behind cognitive science is that recent developments in computer science and artificial intelligence have enormous importance for our conception of human beings. The basic inspiration for cognitive science went something like this: Human beings do information processing. Computers are designed precisely to do information processing. Therefore, one way to study human cognition - perhaps the best way to study it - is to study it as a matter of computational information processing. Some cognitive scientists think that the computer is just a metaphor for the human mind; others think that the mind is literally a computer program. But it is fair to say that without the computational model there would not have been a cognitive science as we now understand it.

The ‘Essay Concerning Human Understanding’ is the first modern systematic presentation of an empiricist epistemology, and as such had important implications for the natural sciences and for philosophy of science generally. Like his predecessor Descartes, the English philosopher John Locke began his account of knowledge from the conscious mind aware of ideas. Unlike Descartes, however, he was concerned not to build a system based on certainty, but to identify the mind’s scope and limits. The premise upon which Locke built his account, including his account of the natural sciences, is that the ideas which furnish the mind are all derived from experience. He thus totally rejected any kind of innate knowledge. In this he was consciously opposing Descartes (1596-1650), who had argued that it is possible to come to knowledge of fundamental truths about the natural world through reason alone - that we can come to know the essential nature of both ‘mind’ and ‘matter’ by pure reason. Locke accepted Descartes’s criterion of clear and distinct ideas as the basis for knowledge, but denied any source for them other than experience. Information that came in via the five senses (ideas of sensation) and ideas engendered from inner experience (ideas of reflection) became the building blocks of the understanding.

Locke combined his commitment to ‘the new way of ideas’ with a tentative espousal of the ‘corpuscular philosophy’ of the Irish scientist Robert Boyle (1627-92). This, in essence, was an acceptance of a revised, more sophisticated account of matter and its properties that had been advocated by the ancient atomists and recently supported by Galileo (1564-1642) and Pierre Gassendi (1592-1655). Boyle argued from theory and experiment that there were powerful reasons to justify some kind of corpuscular account of matter and its properties. He called the latter qualities, which he distinguished as primary and secondary. The distinction between primary and secondary qualities may be reached by two rather different routes: Either from the nature or essence of matter or from the nature and essence of experience, though in practice these have tended to run together. The former considerations make the distinction seem like an a priori, or necessary, truth about the nature of matter, while the latter make it appear to be an empirical hypothesis. Locke, too, accepted this account, arguing that the ideas we have of the primary qualities of bodies resemble those qualities as they are in the object, whereas the ideas of the secondary qualities, such as colour, taste, and smell, do not resemble their causes in the object.

There is no strong connection between acceptance of the primary-secondary quality distinction and empiricism: Descartes had also argued strongly for it, it found near-universal acceptance among natural philosophers, and Locke embraced it within his more comprehensive empiricist philosophy. But Locke’s empiricism did have major implications for the natural sciences, as he well realized. His account begins with an analysis of experience. All ideas, he argues, are either simple or complex. Simple ideas are those like the red of a particular rose or the roundness of a snowball. Complex ideas, our ideas of the rose or the snowball, are combinations of simple ideas. We may create new complex ideas in our imagination - a dragon, for example. But simple ideas can never be created by us: We either have them or not, and characteristically they are caused by, for example, the impact on our senses of rays of light or vibrations of sound in the air coming from a particular physical object. Since we cannot create simple ideas, and they are determined by our experience, our knowledge is in a very strict and uncompromising way limited. Besides, our experiences are always of the particular, never of the general. It is this particular simple idea or that particular complex idea that we apprehend. We never in that sense apprehend a universal truth about the natural world, but only particular instances. It follows from this that all claims to generality about that world - for example, all claims to identify what were then beginning to be called laws of nature - must to that extent go beyond our experience and thus be less than certain.

The Scottish philosopher, historian, and essayist David Hume (1711-76) gives his famous discussion of the matter in both of his major philosophical works, the ‘Treatise’ (1739) and the ‘Enquiry’ (1777). The discussion is couched in terms of the concept of causality: Where we are accustomed to talk of laws, Hume contends, three ideas are involved:

1. That there should be a regular concomitance between events of the type of the cause and those of the type of the effect.

2. That the cause event should be contiguous with the effect event.

3. That the cause event should necessitate the effect event.

Tenets (1) and (2) occasion no difficulty for Hume, since he believes that there are patterns of sensory impressions unproblematically related to the ideas of regular concomitance and of contiguity. But the third requirement is deeply problematic, in that the idea of necessity that figures in it seems to have no sensory impression correlated with it. However carefully and attentively we scrutinize a causal process, we do not seem to observe anything that might be the observed correlate of the idea of necessity. We do not observe any kind of activity, power, or necessitation. All we ever observe is one event following another, which is logically independent of it. Nor is this necessity logical, since, as Hume observes, one can jointly assert the existence of the cause and a denial of the existence of the effect, as specified in the causal statement or the law of nature, without contradiction. What, then, are we to make of the seemingly central notion of necessity that is deeply embedded in the very idea of causation, or lawfulness? To this query Hume gives an ingenious and telling answer. There is an impression corresponding to the idea of causal necessity, but it is a psychological phenomenon: Our expectation that an event similar to those we have already observed to be correlated with the cause-type of event will occur in this case too. Where does that impression come from? It is created as a kind of mental habit by the repeated experience of regular concomitance between events of the type of the cause and events of the type of the effect. What remains, then, is the impression that corresponds to the idea of regular concomitance - and the law of nature asserts nothing but the existence of the regular concomitance.

At this point in our narrative, the question at once arises as to whether this factor of life in nature, thus interpreted, corresponds to anything that we observe in nature. All philosophy is an endeavour to obtain a self-consistent understanding of things observed. Thus, its development is guided in two ways: One is the demand for coherent self-consistency, and the other is the elucidation of things observed. But with our direct observations, how are we to conduct such a comparison? Should we turn to science? No. There is no way in which the scientific endeavour can detect the aliveness of things: Its methodology rules out the possibility of such a finding. On this point, the English mathematician and philosopher Alfred North Whitehead (1861-1947) comments that science can find no individual enjoyment in nature, and science can find no creativity in nature; it finds mere rules of succession. These negations are true of natural science. They are inherent in its methodology. The reason for this blindness of physical science lies in the fact that such science only deals with half the evidence provided by human experience. It divides the seamless coat - or, to change the metaphor into a happier form, it examines the coat, which is superficial, and neglects the body, which is fundamental.

Whitehead claims that the methodology of science makes it blind to a fundamental aspect of reality, namely, the primacy of experience: It neglects half of the evidence. Working within Descartes’ dualistic framework of matter and mind as separate and incommensurate, science limits itself to the study of objectified phenomena, neglecting the subject and the mental events that constitute his or her experience.

Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect ‘blindness’, but there is no need to rely on suspicions. This blindness is clearly evident. Scientific discoveries, impressive as they are, are fundamentally superficial. Science can express regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton’s law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity - gravity. According to this law, the gravitational attraction between two objects decreases in proportion to the square of the distance between them. Why is that so? Newton could not provide an answer. Simpler still, why does space have three dimensions? Why is time one-dimensional? Whitehead notes, ‘None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure which within the scale of observation do in fact prevail’.
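In symbols, the regularity in question is the familiar inverse-square law of universal gravitation:

\[
F = G\,\frac{m_1 m_2}{r^2},
\]

where ‘F’ is the attractive force between two bodies of masses ‘m_1’ and ‘m_2’ separated by distance ‘r’, and ‘G’ is the gravitational constant. The formula states the mode of procedure exactly, yet, as Whitehead observes, nothing in it explains why the exponent is 2 rather than anything else.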

This analysis reveals that the capacity of science to fathom the depths of reality is limited. For example, if reality is, in fact, made up of discrete units, and these units have the fundamental character of being ‘throbs of experience’, then science may be in a position to discover the discreteness: But it has no access to the subjective side of nature, since, as the Austrian physicist Erwin Schrödinger (1887-1961) points out, we ‘exclude the subject of cognizance from the domain of nature that we endeavour to understand’. It follows that in order to find ‘the elucidation of things observed’ in relation to the experiential or aliveness aspect, we cannot rely on science; we need to look elsewhere.

If, instead of relying on science, we rely on our immediate observation of nature and of ourselves, we find, first, that this [i.e., Descartes’] stark division between mentality and nature has no ground in our fundamental observation: We find ourselves living within nature. Secondly, we should conceive mental operations as among the factors which make up the constitution of nature. And thirdly, we should reject the notion of idle wheels in the process of nature: Every factor which emerges makes a difference, and that difference can only be expressed in terms of the individual character of that factor.

Whitehead proceeds to analyse our experiences in general, and our observations of nature in particular, and ends up with ‘mutual immanence’ as a central theme. This mutual immanence is obvious in the case of an experience: I am a part of the universe, and, since I experience the universe, the experienced universe is part of me. Whitehead gives an example: ‘I am in the room, and the room is an item in my present experience. But my present experience is what I am now’. A generalization of this relationship to the case of any actual occasion yields the conclusion that ‘the world is included within the occasion in one sense, and the occasion is included in the world in another sense’. The idea that each actual occasion appropriates its universe follows naturally from such considerations.

The description of an actual entity as being a distinct unit is, therefore, only one part of the story. The other, complementary part is this: The very nature of each and every actual entity is one of interdependence with all the other actual entities in the universe. Each and every actual entity is a process of prehending or appropriating all the other actual entities and creating one new entity out of them all, namely, itself.

There are two general strategies for distinguishing laws from accidentally true generalizations. The first stands by Hume’s idea that causal connections are mere constant conjunctions, and then seeks to explain why some constant conjunctions are better than others. That is, this first strategy accepts the principle that causation involves nothing more than certain events always happening together with certain others, and then seeks to explain why some such patterns - the ‘laws’ - matter more than others - the ‘accidents’. The second strategy, by contrast, rejects the Humean presupposition that causation involves nothing more than happenstantial co-occurrence, and instead postulates a relationship of ‘necessitation’, a kind of ‘cement’, which links events that are connected by law, but not those events (like being a screw in my desk and being made of copper) that are only accidentally conjoined.

There are a number of versions of the first, Humean strategy. The most successful, originally proposed by the Cambridge mathematician and philosopher F.P. Ramsey (1903-30) and later revived by the American philosopher David Lewis (1941-2002), holds that laws are those true generalizations that can be fitted into an ideal system of knowledge. The thought is that the laws are those patterns that are somehow explicable in terms of basic science, either as fundamental principles themselves or as consequences of those principles, while accidents, although true, have no such explanation. Thus, ‘All water at standard pressure boils at 100°C’ is a consequence of the laws governing molecular bonding, but the fact that ‘All the screws in my desk are copper’ is not part of the deductive structure of any satisfactory science. Ramsey neatly encapsulated this idea by saying that laws are ‘consequences of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system’.

Advocates of the alternative non-Humean strategy object that the difference between laws and accidents is not a ‘linguistic’ matter of deductive systematization, but rather a ‘metaphysical’ contrast between the kinds of links they report. They argue that there is a link in nature between being at 100°C and boiling, but not between being ‘in my desk’ and being ‘made of copper’, and that this has nothing to do with how the description of this link may fit into theories. According to D.M. Armstrong (1983), the most prominent defender of this view, the real difference between laws and accidents is simply that laws report relationships of natural ‘necessitation’, while accidents only report that two types of events happen to occur together.

Armstrong’s view may seem intuitively plausible, but it is arguable that the notion of necessitation simply restates the problem rather than solving it. Armstrong says that necessitation involves something more than constant conjunction: If two events are related by necessitation, then it follows that they are constantly conjoined; but two events can be constantly conjoined without being related by necessitation, as when the constant conjunction is just a matter of accident. So necessitation is a stronger relationship than constant conjunction. However, Armstrong and other defenders of this view say very little about what this extra strength amounts to, except that it distinguishes laws from accidents. Armstrong’s critics argue that a satisfactory account of laws ought to cast more light than this on the nature of laws.

Hume said that the earlier of two causally related events is always the cause, and the later the effect. However, there are a number of objections to using the earlier-later ‘arrow of time’ to analyse the directional ‘arrow of causation’. For a start, it seems possible in principle that some causes and effects could be simultaneous. What is more, the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophical explanation - and one of the most popular explanations is that the direction of time itself depends on the direction of causation. If we take this view, and explain ‘earlier’ as the direction in which causes lie and ‘later’ as the direction of effects, then we will clearly need to find some account of the direction of causation which does not itself assume the direction of time.

A number of such accounts have been proposed. David Lewis (1979) has argued that the asymmetry of causation derives from an ‘asymmetry of over-determination’. The over-determination of present events by past events - consider a person who dies after simultaneously being shot and struck by lightning - is a very rare occurrence. By contrast, the multiple ‘over-determination’ of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis’s example, when the president presses the red button in the White House, the future effects do not only include the dispatch of nuclear missiles, but also the fingerprint on the button, his trembling, the further depletion of his gin bottle, the recording of the button’s click on tape, the emission of light waves bearing the image of his action through the window, the warming of the wire from the passage of the signal current, and so on, and so on.

Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then this implies the effect would have been absent too, since (apart from freaks like the lightning-shooting case) there will not be any other causes left to ‘fix’ the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to ‘fix’ the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.
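The counterfactual machinery behind this argument can be stated compactly; ‘□→’ is the standard symbol for the counterfactual conditional, and ‘O(c)’ abbreviates the proposition that event ‘c’ occurs. On Lewis’s analysis, ‘e’ causally depends on ‘c’ just in case:

\[
O(c) \mathrel{\Box\!\to} O(e) \quad \text{and} \quad \neg O(c) \mathrel{\Box\!\to} \neg O(e).
\]

Since ‘c’ and ‘e’ both actually occur, the first conjunct holds trivially, and the work is done by ‘if c had not occurred, e would not have occurred’; causation is then taken to be the ancestral of this dependence, i.e., a chain of such dependences.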

Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Following the philosopher of science and probability theorist Hans Reichenbach (1891-1953), they note that the different causes of any given type of effect are normally probabilistically independent of each other; by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, both obesity and high excitement can cause heart attacks, but this does not imply that fat people are more likely to get excited than thin ones. By contrast, the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
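Reichenbach’s ‘common cause principle’ makes the asymmetry precise. For joint effects ‘A’ and ‘B’ of a common cause ‘C’ (nicotine-stained fingers and lung cancer, with smoking as ‘C’), the effects are correlated overall,

\[
P(A \wedge B) > P(A)\,P(B),
\]

yet the common cause ‘screens off’ the correlation: Conditional on ‘C’, the effects are independent,

\[
P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C).
\]

No analogous correlation is expected among the independent causes of a common effect, and that difference marks the causal direction.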

However, there is another course of thought in the philosophy of science: The tradition of negative or eliminative induction. From the English statesman and philosopher Francis Bacon (1561-1626), and in modern times the philosopher of science Karl Raimund Popper (1902-94), we have the idea of using logic to bring falsifying evidence to bear on hypotheses about what must universally be the case. Many thinkers accept in essence Popper’s solution to the problem of demarcating proper science from its imitators, namely that the former results in genuinely falsifiable theories whereas the latter do not. Falsifiability lay behind many people’s objections to such ideologies as psychoanalysis and Marxism.

Hume was interested in the processes by which we acquire knowledge: The processes of perceiving and thinking, of feeling and reasoning. He recognized that much of what we claim to know derives from other people secondhand, thirdhand or worse; moreover, our perceptions and judgements can be distorted by many factors - by what we are studying, as well as by the very act of study itself. The main reason, however, behind his emphasis on ‘probabilities and those other measures of evidence on which life and action entirely depend’ is this:

It is evident that all reasonings concerning ‘matter of fact’ are founded on the relation of cause and effect, and that we can never infer the existence of one object from another unless they are connected together, either mediately or immediately.

When we apparently observe a whole sequence, say of one ball hitting another, what exactly do we observe? And in the much commoner cases, when we wonder about the unobserved causes or effects of the events we observe, what precisely are we doing?

Hume recognized that a notion of ‘must’ or necessity is a peculiar feature of causal relations, inferences and principles, and he challenges us to explain and justify the notion. He argued that there is no observable feature of events, nothing like a physical bond, which can be properly labelled the ‘necessary connection’ between a given cause and its effect: Events simply are, they merely occur, and there is no ‘must’ or ‘ought’ about them. However, repeated experience of pairs of events sets up a habit of expectation in us, such that when one of the pair occurs we inescapably expect the other. This expectation makes us infer the unobserved cause or unobserved effect of the observed event, and we mistakenly project this mental inference on to the events themselves. There is no necessity observable in causal relations; all that can be observed is regular sequence. There is necessity in causal inferences, but only in the mind. Once we realize that causation is a relation between pairs of events, we also realize that often we are not present for the whole sequence which we want to divide into ‘cause’ and ‘effect’. Our understanding of the causal relation is thus intimately linked with the role of causal inference, because only causal inference entitles us to ‘go beyond what is immediately present to the senses’. But now two very important assumptions emerge behind the causal inference: The assumption that ‘like causes, in like circumstances, will always produce like effects’, and the assumption that ‘the course of nature will continue uniformly the same’ - or, briefly, that the future will resemble the past. Unfortunately, this last assumption lacks either empirical or a priori proof; that is, it can be conclusively established neither by experience nor by thought alone.

Hume frequently endorsed a standard seventeenth-century view that all our ideas are ultimately traceable, by analysis, to sensory impressions of an internal or external kind. Accordingly, he claimed that all his theses are based on ‘experience’, understood as sensory awareness together with memory, since only experience establishes matters of fact. But is our belief that the future will resemble the past properly construed as a belief concerning only a matter of fact? As the English philosopher Bertrand Russell (1872-1970) remarked early in the twentieth century, the real problem that Hume raises is whether future futures will resemble future pasts, in the way that past futures really did resemble past pasts. Hume declares that if ‘the past may be no rule for the future, all experience becomes useless, and can give rise to no inference or conclusion’. And yet, he held, the supposition cannot stem from innate ideas, since there are no innate ideas in his view; nor can it stem from any abstract formal reasoning. For one thing, the future can surprise us, and no formal reasoning seems able to embrace such contingencies; for another, even animals and unthinking people conduct their lives as if they assume the future resembles the past: Dogs return for buried bones, children avoid a painful fire, and so forth. Hume is not deploring the fact that we have to conduct our lives on the basis of probabilities, and he is not saying that inductive reasoning could or should be avoided or rejected. Rather, he accepted inductive reasoning but tried to show that whereas formal reasoning of the kind associated with mathematics cannot establish or prove matters of fact, factual or inductive reasoning lacks the ‘necessity’ and ‘certainty’ associated with mathematics. His position is therefore clear: Because ‘every effect is a distinct event from its cause’, only investigation can settle whether any two particular events are causally related. Causal inferences cannot be drawn with the force of logical necessity familiar to us from a priori reasoning, but, although they lack such force, they should not be discarded. In the context of causation, inductive inferences are inescapable and invaluable. What, then, makes ‘past experience’ the standard of our future judgement? The answer is ‘custom’: It is a brute psychological fact, without which even animal life of a simple kind would be more or less impossible. ‘We are determined by custom to suppose the future conformable to the past’ (Hume, 1978). Nevertheless, whenever we need to calculate likely events we must supplement and correct such custom by self-conscious reasoning.

Nonetheless, the causal theory of reference will fail once it is recognized that all representation must occur under some aspect, and that the extensionality of causal relations is inadequate to capture the aspectual character of reference. The only kind of causation that could be adequate to the task of reference is intentional causation, or mental causation; but the causal theory of reference cannot concede that ultimately reference is achieved by some mental device, since the whole approach behind the causal theory was to try to eliminate the traditional mentalism of theories of reference and meaning in favour of objective causal relations in the world. The causal theory, though at present by far the most influential theory of reference, will prove to be a failure for these reasons.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. But that is no problem for the identity theory. As J.J.C. Smart (1962), who first argued for the identity theory, emphasized, the requisite identities do not depend on our concepts of mental states or the meanings of mental terms. For ‘a’ to be identical with ‘b’, ‘a’ and ‘b’ must have exactly the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The principle at work here is the indiscernibility of identicals: If ‘A’ is identical with ‘B’, then every property that ‘A’ has, ‘B’ has, and vice versa. This is sometimes known as Leibniz’s Law.
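In standard logical notation, Leibniz’s Law reads:

\[
a = b \;\rightarrow\; \forall F\,(Fa \leftrightarrow Fb),
\]

that is, if ‘a’ and ‘b’ are identical, then any property ‘F’ had by the one is had by the other. The principle concerns the objects themselves, not the terms that pick them out, which is why the identity can hold even though ‘a’ and ‘b’ differ in meaning.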

But a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: As a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So disallowing dualism with respect to substances and their states simply leads to its reappearance at the level of the properties of those states.

An ‘idea’ is a product of mental activity: Something had by the element or complex of elements in an individual that feels, perceives, thinks, wills, and especially reasons - the intelligence, intellect, consciousness, faculty, function, or power we call the mind. Human intellectual history is in essence a history of ideas, for thoughts are distinctly intellectual and stress contemplation and reasoning; language, as the saying goes, is the dress of thought. Ideas began with Plato, as eternal, mind-independent forms or archetypes of the things in the material world. Neoplatonism made them thoughts in the mind of God who created the world. The much criticized ‘new way of ideas’, so much a part of seventeenth and eighteenth-century philosophy, began with Descartes’ (1596-1650) extension of ideas to cover whatever is in human minds too, an extension of which Locke (1632-1704) made much use. But are ideas representational, like mental images of things outside the mind, or non-representational, like sensations? If representational, are they mental objects, standing between the mind and what they represent, or are they mental acts and modifications of a mind perceiving the world directly? Finally, are they neither objects nor mental acts, but dispositions? Malebranche (1638-1715) and Arnauld (1612-94), and then Leibniz (1646-1716), famously disagreed about how ‘ideas’ should be understood, and recent scholars disagree about how Arnauld, Descartes, Locke and Malebranche in fact understood them.

Ideas give rise to many problems of interpretation, but between them they define a space of philosophical problems. Ideas are that with which we think, or, in Locke’s terms, whatever the mind may be employed about in thinking. Looked at that way, they seem to be inherently transient, fleeting, and unstable private presences; yet they are also the medium in which objective knowledge is expressed, the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato’s theory of ‘forms’ is an early celebration of the objective and timeless existence of ideas as concepts, reified to the point where they make up the only real world: Separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notably in the ‘Timaeus’, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.

This assimilation came together with a general bias toward the sensory, so that what lies in the mind may be thought of as something like images, and with a belief that thinking is well explained as the manipulation of such images. It is not reason but ‘the imagination’ that is found to be responsible for our making the empirical inferences that we do. There are certain general ‘principles of the imagination’ according to which ideas naturally come and go in the mind under certain conditions. It is the task of the ‘science of human nature’ to discover such principles, but without itself going beyond experience. For example, an observed correlation between things of two kinds can be seen to produce in everyone a propensity to expect a thing of the second sort given an experience of a thing of the first sort. We get a feeling, or an ‘impression’, when the mind makes such a transition, and that is what directly leads us to attribute a necessary relation between things of the two kinds. There is no necessity in the relations between things that happen in the world; but, given our experience and the way our minds naturally work, we cannot help thinking that there is.

A similar appeal to certain ‘principles of the imagination’ is what explains our belief in a world of enduring objects. Experience alone cannot produce that belief: Everything we directly perceive is ‘momentary’ and ‘fleeting’. And whatever our experience is like, no reasoning could assure us of the existence of something independent of our impressions which continues to exist when they cease. The series of constantly changing sense impressions presents us with observable features which Hume calls ‘constancy’ and ‘coherence’, and these naturally operate on the mind in such a way as eventually to produce ‘the opinion of a continued and distinct existence’. The explanation is complicated, but it is meant to appeal only to psychological mechanisms which can be discovered by ‘careful and exact experiments, and the observation of those particular effects, which result from [the mind’s] different circumstances and situations’.

We believe not only in bodies, but also in persons, or selves, which continue to exist through time, and this belief too can be explained only by the operation of certain ‘principles of the imagination’. We never directly perceive anything we can call ourselves: The most we can be aware of in ourselves are our constantly changing momentary perceptions, not the mind or self which has them. For Hume (1711-76), there is nothing that really binds the different perceptions together; we are led into the ‘fiction’ that they form a unity only because of the way in which the thought of such a series of perceptions works upon the mind. ‘The mind is a kind of theatre, where several perceptions successively make their appearance; . . . there is properly no simplicity in it at one time, nor identity in different; whatever natural propensity we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us. They are the successive perceptions only, that constitute the mind’.

Leibniz held, in opposition to Descartes, that adult humans can have experiences of which they are unaware: Experiences which affect what they do, but which are not brought to self-consciousness. Yet there are creatures, such as animals and infants, which completely lack the ability to reflect on their experiences and to become aware of them as experiences of theirs. The unity of a subject’s experience, which stems from his capacity to recognize all his experience as his, was dubbed by Kant the ‘transcendental unity of apperception’ - ‘apperception’ being Leibniz’s term for inner awareness or self-consciousness, in contrast with ‘perception’, or outer awareness. This apprehension of unity is transcendental rather than empirical: It is presupposed in experience and cannot be derived from it. Kant used the need for this unity as the basis of his attempted refutation of scepticism about the external world. He argued that my experiences could only be united in one self-consciousness if at least some of them were experiences of a law-governed world of objects in space. Outer experience is thus a necessary condition of inner awareness.

Here we seem to have a clear case of ‘introspection’. Derived from the Latin ‘intro’ (within) + ‘specere’ (to look), introspection is the attention the mind gives to itself or to its own operations and occurrences. I can know there is a fat hairy spider in my bath by looking there and seeing it. But how do I know that I am seeing it rather than smelling it, or that my attitude to it is one of disgust rather than delight? One answer is: By a subsequent introspective act of ‘looking within’ and attending to the psychological state - my seeing the spider. Introspection, therefore, is a mental occurrence which has as its object some other psychological state, such as perceiving, desiring, willing, or feeling. In being a distinct awareness-episode it is different from the more general ‘self-consciousness’ which characterizes all or some of our mental history.

The awareness generated by an introspective act can have varying degrees of complexity. It might be a simple ‘knowledge of (mental) things’ - such as a particular perception-episode - or it might be the more complex ‘knowledge of truths’ about one’s own mind. In this latter full-blown judgement form, introspection is usually the self-ascription of psychological properties and, when linguistically expressed, results in statements like ‘I am watching the spider’ or ‘I am repulsed’.

In psychology this deliberate inward look becomes a scientific method when it is ‘directed toward answering questions of theoretical importance for the advancement of our systematic knowledge of the laws and conditions of mental processes’. In philosophy, introspection (sometimes also called ‘reflection’) remains simply that notice which mind takes of its own operations and has been used to serve the following important functions:

(1) Methodological: Thought experiments are a powerful tool in philosophical investigation. The Ontological Argument, for example, asks us to try to think of the most perfect being as lacking existence, and Berkeley’s Master Argument challenges us to conceive of an unseen tree; conceptual results are then drawn from our failure or success. For such experiments to work, we must not only have (or fail to have) the relevant conceptions but also know that we have (or fail to have) them - presumably by introspection.

(2) Metaphysical: A philosophy of mind needs to take cognizance of introspection. One can argue for ‘ghostly’ mental entities, for ‘qualia’, for ‘sense-data’, by claiming introspective awareness of them. First-person psychological reports can have special consequences for the nature of persons and personal identity: Hume, for example, was content to reject the notion of a soul-substance because he failed to find such a thing by ‘looking within’. Moreover, some philosophers argue for the existence of additional perspectival facts - the fact of ‘what it is like’ to be the person I am, or to have an experience of such-and-such a kind. Introspection, as our access to such facts, becomes important when we consider whether a complete account of the world could be given in non-perspectival terms.

(3) Epistemological: Surprisingly, the most important use made of introspection has been in accounting for our knowledge of the outside world. According to a foundationalist theory of justification, an empirical belief is either basic and ‘self-justifying’ or justified in relation to basic beliefs. Basic beliefs therefore constitute the rock-bottom of all justification and knowledge. Now introspective awareness is said to have a unique epistemological status: In it, we are said to achieve the best possible epistemological position, and consequently introspective beliefs constitute the foundation of all justification.

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than some other belief? The same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays within a network of relations to other beliefs - the role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I would from other beliefs, just as I infer that belief from different things than I infer other beliefs from.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of beliefs. That is how coherence comes in: A belief has the content that it does because of the way in which it coheres within a system of beliefs. Weak coherence theories affirm that coherence is one determinant of the content of belief; strong coherence theories affirm that coherence is the sole determinant of the content of belief.

Nonetheless, the concept of the ‘given’ refers to the immediate apprehension of the contents of sense experience, expressed in first-person, present-tense reports of appearances. Apprehension of the given is seen as immediate both in a causal sense, since it lacks the usual causal chain involved in perceiving real qualities of physical objects, and in an epistemic sense, since judgements expressing it are justified independently of all other beliefs and evidence. Some proponents of the idea of the ‘given’ maintain that its apprehension is absolutely certain: Infallible, incorrigible and indubitable. It has been claimed also that a subject is omniscient with regard to the given - if a property appears, then the subject knows this.

Without some independent indication that some of the beliefs within a coherent system are true, coherence in itself is no indication of truth: fairy stories can cohere. Our criteria for justification must indicate to us the probable truth of our beliefs. Hence, within any system of beliefs there must be some privileged class with which others must cohere to be justified. In the case of empirical knowledge, such privileged beliefs must represent the point of contact between subject and world: they must originate in perception. When challenged, however, we justify our ordinary perceptual beliefs about physical properties by appeal to beliefs about appearances, and the latter seem more suitable as foundational, since there is no class of more certain perceptual beliefs to which we appeal for their justification.

The argument that foundations must be certain was offered by Lewis (1946). He held that no proposition can be probable unless some are certain. If the probability of all propositions or beliefs were relative to evidence expressed in others, and if these relations were linear, then any regress would apparently have to terminate in propositions or beliefs that are certain. But Lewis shows neither that such relations must be linear nor that regresses cannot terminate in beliefs that are merely probable or justified in themselves without being certain or infallible.

Arguments against the idea of the given originate with Kant (1724-1804), who argues that percepts without concepts do not yet constitute any form of knowing. Being non-epistemic, they presumably cannot serve as epistemic foundations. Once we recognize that we must apply concepts of properties to appearances, and formulate beliefs utilizing those concepts, before the appearances can play any epistemic role, it becomes more plausible that such beliefs are fallible. The argument was developed by Wilfrid Sellars (1963), according to whom the idea of the given involves a confusion between sensing particulars (having sense impressions), which is non-epistemic, and having non-inferential knowledge of propositions referring to appearances. The former may be necessary for acquiring perceptual knowledge, but it is not itself a primitive kind of knowing. Its being non-epistemic renders it immune from error, but also unsuitable for epistemological foundations. The latter, non-inferential perceptual knowledge, is fallible, requiring concepts acquired through trained responses to public physical objects.

Contemporary foundationalists deny the coherentist's claim while also eschewing the claim that foundations, in the form of reports about appearances, are infallible. They seek alternatives to the given as foundations. Although arguments against infallibility are sound, other objections to the idea of foundations are not. That concepts of objective properties are learned prior to concepts of appearances, for example, implies neither that claims about appearances are less certain than claims about objective properties, nor that the latter are prior in chains of justification. That there can be no knowledge prior to the acquisition and consistent application of concepts allows for propositions whose truth requires only consistent application of concepts, and this may be so for some claims about appearances. Coherentists would add, however, that such beliefs stand in need of justification themselves and so cannot be foundations.

Coherentists will claim that a subject requires evidence that he applies concepts consistently - that he is able, for example, consistently to distinguish red from other colours that appear. Beliefs about red appearances could not then be justified independently of other beliefs expressing that evidence. To salvage the part of the doctrine of the given that holds beliefs about appearances to be self-justified, we require an account of how such justification is possible - how some beliefs about appearances can be justified without appeal to evidence. Some foundationalists simply assert such warrant as derived from experience, but without the appeals to certainty made by proponents of the given.

It is, nonetheless, this capacity for self-knowledge that develops into an epistemological corollary of metaphysical dualism. The world of 'matter' is known through external or outer sense-perception; so cognitive access to 'mind' must be based on a parallel process of introspection, which, 'though it be not sense, as having nothing to do with external objects, yet it is very like it, and might properly enough be called internal sense'. However, having mind as object is not sufficient to make a way of knowing 'inner' in the relevant sense, because mental facts can be grasped through sources other than introspection. The point, rather, is that 'inner perception' provides a kind of access to the mental not obtained otherwise - it is a 'look within from within'. Stripped of metaphor, this indicates the following epistemological features:

1. Only I can introspect my mind.

2. I can introspect only my mind.

3. Introspective awareness is superior to any other knowledge of contingent facts that I or others might have.

The tenets (1) and (2) are grounded in the Cartesian idea of the 'privacy' of the mental. Normally, a single object can be perceptually or inferentially grasped by many subjects, just as the same subject can perceive and infer different things. The epistemic peculiarity of introspection is that it is exclusive: it gives knowledge only of the mental history of the subject introspecting.

Tenet (3) of the traditional theory is grounded in the Cartesian idea of 'privileged access'. The epistemic superiority of introspection lies in its being an infallible source of knowledge: first-person psychological statements, which are its typical results, cannot be mistaken. This claim is sometimes supported by an 'imaginability test', e.g., the impossibility of imagining that I believe that I am in pain while at the same time imagining evidence that I am not in pain. An apparent counter-example to this infallibility claim would be the introspective judgement 'I am perceiving a dead friend' when I am really hallucinating. This is handled by reformulating such introspective reports as 'I seem to be perceiving a dead friend'. The importance of such privileged access is that introspection becomes a way of knowing immune from the pitfalls of other sources of cognition. The basic asymmetry between first- and third-person psychological statements is thus explained by the difference between introspective and non-introspective methods; but introspective awareness itself can be accounted for in different ways:

(1) Non-perceptual models - Self-scrutiny need not be perceptual. My awareness of an object 'O' changes the status of 'O': it now acquires the property of 'being an object of awareness', and on the basis of this acquired property I can infer that I am aware of 'O'. Such an 'inferential model' of awareness is suggested by the Bhatta Mimamsa school of Indian epistemology. This view does not construe introspection as a direct awareness of mental operations; interestingly, however, we will have occasion to refer to theories where the emphasis on directness itself leads to a non-perceptual, or at least non-observational, account of introspection.

(2) Reflexive models - Epistemic access to our minds need not involve a separate attentive act. Part of what it means to be in a conscious state is that I know that I am in that state when I am in it. Consciousness is here conceived as a 'phosphorescence' attached to some mental occurrence, in no need of a subsequent illumination to reveal itself. Of course, if introspection is defined as a distinct act, then reflexive models are really accounts of first-person access that make no appeal to introspection.

(3) Public-mind theories and fallibility/infallibility models - The physicalists' denial of metaphysically private mental facts naturally suggests that 'looking within' is not merely like perception but is perception. For Ryle (1900-76), mental states are 'iffy' behavioural facts which, in principle, are equally accessible to everyone in the same way. One's own self-awareness, therefore, is in effect no different in kind from anyone else's observations about one's mind.

A more interesting move is for the physicalist to retain the truism that I grasp that I am sad in a very different way from that in which I know you to be sad. This directness or non-inferential character of self-knowledge can be preserved in some physicalist theories of introspection. For instance, Armstrong's identification of mental states with causes of bodily behaviour, and of the latter with brain states, makes introspection the process of acquiring information about such inner physical causes. But since introspection is itself a mental state, it is a process in the brain as well: and since its grasp of the relevant causal information is direct, it becomes a process in which the brain scans itself.

Alternatively, a broadly 'functionalist' account of mental states suggests a machine analogue of the introspective situation: a machine table with the instruction 'Print: "I am in state A" when in state A' results in the output 'I am in state A' when state 'A' occurs. Similarly, if we define mental states and events functionally, we can say that introspection occurs when an occurrence of a mental state 'M' directly results in awareness of 'M'. Observe that this way of emphasizing directness yields a non-perceptual and non-observational model of introspection. The machine, in printing 'I am in state A', does so (when it is not making a 'verbal mistake') just because it is in state 'A'. There is no computation of information or process of ascertaining involved: the process, at best, consists simply in passing through a sequence of states.
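To make the machine-table analogue concrete, here is a minimal sketch in Python. It is only an illustration of the idea just described, not a published formulation; the state names and report format are invented for the example. The point is that the self-report is triggered by nothing more than being in the state: no scanning or ascertaining step intervenes.

    # A minimal sketch of the machine-table analogue of introspection.
    # State names ("A", "B") and the report format are illustrative assumptions.

    MACHINE_TABLE = {
        "A": "I am in state A",
        "B": "I am in state B",
    }

    def report(current_state: str) -> str:
        # The self-report is produced directly from the state itself.
        # There is no scanning, inference, or ascertaining step: being in
        # the state is, by itself, what triggers the report.
        return MACHINE_TABLE[current_state]

    for state in ("A", "B"):
        print(report(state))   # "I am in state A", then "I am in state B"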

Traditionally, the legitimate question 'How do I know that I am seeing a spider?' was interpreted as a demand for the faculty or information-processing mechanism whereby I come to acquire this knowledge. Peculiarities of first-person psychological awareness and reports were carried over as peculiarities of this mechanism. However, the question need not demand the search for a method of knowing, but rather an explanation of the special epistemic features of first-person psychological statements. On that reading, the problem of introspection (as a way of knowing) dissolves, but the problem of explaining 'introspective' or first-person authority remains.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent, 'S', exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mrs. Dudek, or in a free-market economy, or in God. It is sometimes supposed that all beliefs are 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or in God a matter of your believing that a free-market economy is desirable or that God exists.

It is doubtful, however, that non-propositional beliefs can, in every case, be reduced in this way. Debate on this point has tended to focus on an apparent distinction between 'belief-that' and 'belief-in', and the application of this distinction to belief in God. St. Thomas Aquinas (1225-74), for example, supposed that to believe in God is simply to believe that certain truths hold: that God exists, that he is benevolent, and so forth. Others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

H.H. Price (1969) defends the claim that there are different sorts of 'belief-in', some, but not all, reducible to 'belief-that'. If you believe in God, you believe that God exists, that God is good, and so on. But, according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. One might attempt to analyse this further attitude in terms of additional beliefs-that: (1) 'S' believes that 'χ' exists (and perhaps holds further factual beliefs about 'χ'); (2) 'S' believes that 'χ' is good or valuable in some respect; and (3) 'S' believes that 'χ''s being good or valuable in this respect is itself a good thing. An analysis of this sort, however, fails adequately to capture the affective component of belief-in. Thus, according to Price, if you believe in God, your belief is not merely that certain truths hold: you possess, in addition, an attitude of commitment and trust towards God.

Notoriously, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require further layers of justification not required for cases of belief-that.

Some philosophers have argued that, at least for cases in which belief-in is synonymous with faith (or faith-in), evidential thresholds for constituent propositional beliefs are diminished. You may reasonably have faith in God or Mrs. Dudek, even though beliefs about their respective attributes, were you to harbour them, would fall short of ordinary evidential standards.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this pro-attitude is united with his belief that God exists, the belief may survive epistemic buffeting - and reasonably so - in a way that an ordinary propositional belief would not.

What is at stake here is the appropriateness of distinct types of explanation. Ever since the time of Aristotle (384-322 BC), philosophers have emphasized the importance of explanatory knowledge. In simplest terms, we want to know not only what is the case but also why it is. This consideration suggests that we define explanation as an answer to a why-question. Such a definition would, however, be too broad, because some why-questions are requests for consolation (Why did my son have to die?) or moral justification (Why should women not be paid the same as men for the same work?). It would also be too narrow, because some explanations are responses to how-questions (How does radar work?) or how-possibly-questions (How is it possible for cats always to land on all four feet?).

In its overall sense, 'to explain' means to make clear, to make plain, or to provide understanding. Definitions of this sort are philosophically unhelpful, for the terms used in them are no less problematic than the term to be defined. Moreover, since a wide variety of things require explanation, and since many different types of explanation exist, a more complex account is required. The term 'explanandum' is used to refer to that which is to be explained; the term 'explanans' to that which does the explaining. The explanans and the explanandum taken together constitute the explanation.

One common type of explanation occurs when deliberate human actions are explained in terms of conscious purposes. 'Why did you go to the pharmacy yesterday?' 'Because I had a headache and needed to get some aspirin.' It is tacitly assumed that aspirin is an appropriate medication for headaches and that going to the pharmacy would be an efficient way of getting some. Such explanations are, of course, teleological, referring, as they do, to goals. What does the explaining is not the realisation of a future goal - if the pharmacy happened to be closed for stocktaking, the aspirin would not have been obtained there, but that would not invalidate the explanation. Some philosophers would say that the antecedent desire to achieve the end is what does the explaining; others might say that the explaining is done by the nature of the goal and the fact that the action promoted the chances of realizing it. In any case, it should not be automatically assumed that such explanations are causal. Philosophers differ considerably on whether these explanations are to be framed in terms of causes or reasons.

The distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day, or simply: to get it there in a day. Strictly, the reason is expressed by 'to get it there in a day'. But what this expresses is my reason only because I am suitably motivated - I am in a reason state, wanting to get the letter there in a day. It is reason states - wants, beliefs and intentions - and not reasons strictly so called, that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.

It has also seemed to those who deny that reasons are causes that the former justify, as well as explain, the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The 'logical connection argument' proceeds from this claim to the conclusion that reasons are not causes.

All the same, whether explanation is framed in terms of reasons or causes, there are many differing analyses of such concepts as intention and agency. Expanding the domain beyond consciousness, Freud maintained that much human behaviour can be explained in terms of unconscious wishes. These Freudian explanations should probably be construed as basically causal.

Problems arise when teleological explanations are offered in other contexts. The behaviour of non-human animals is often explained in terms of purpose, e.g., the mouse ran to escape from the cat. In such cases the existence of conscious purpose seems dubious. The situation is still more problematic when a super-empirical purpose is invoked, e.g., the explanation of living species in terms of God's purpose, or the vitalistic explanation of biological phenomena in terms of an entelechy or vital principle. In recent years an 'anthropic principle' has received attention in cosmology. All such explanations have been condemned by many philosophers as anthropomorphic.

Notwithstanding the preceding objection, philosophers and scientists often maintain that functional explanations play an important and legitimate role in various sciences, such as evolutionary biology, anthropology and sociology. For example, in the case of the peppered moth in Liverpool, the change in colour from the light to the dark phase and back again provided adaptation to a changing environment and fulfilled the function of reducing predation on the species. In the study of primitive societies, anthropologists have maintained that various rituals, e.g., a rain dance, which may be inefficacious in bringing about their manifest goals, e.g., producing rain, actually fulfil the latent function of increasing social cohesion at a period of stress. Philosophers and scientists who appeal to teleological and/or functional explanations in common sense and science often take pains to argue that such explanations can be analysed entirely in terms of efficient causes, thereby escaping the charge of anthropomorphism; yet not all philosophers agree.

Mainly to avoid the incursion of unwanted theology, metaphysics, or anthropomorphism into science, many philosophers and scientists - especially during the first half of the twentieth century - held that science provides only descriptions and predictions of natural phenomena, not explanations. Beginning in the 1930s, a series of influential philosophers of science - including Karl Popper (1935), Carl Hempel and Paul Oppenheim (1948), and Hempel (1965) - maintained that empirical science can explain natural phenomena without appealing to metaphysics or theology. It appears that this view is now accepted by the vast majority of philosophers of science, though there is sharp disagreement on the nature of scientific explanation.

This approach, developed by Hempel, Popper and others, became virtually a 'received view' in the 1960s and 1970s. According to this view, to give a scientific explanation of a natural phenomenon is to show how the phenomenon can be subsumed under a law of nature. A particular rupture in a water pipe can be explained by citing the universal law that water expands when it freezes, together with the fact that the temperature of the water in the pipe dropped below the freezing point. General laws, as well as particular facts, can be explained by subsumption: the law of conservation of linear momentum can be explained by derivation from Newton's second and third laws of motion. Each of these explanations is a deductive argument: the premisses constitute the explanans and the conclusion is the explanandum. The explanans contain one or more statements of universal laws and, in many cases, statements describing initial conditions. This pattern of explanation is known as the 'deductive-nomological model'; any such argument shows that the explanandum had to occur, given the explanans.
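Displayed in the numbered argument-form convention used later in this discussion, the received view's schema can be put as follows (the labels are the standard ones: the L's are universal laws, the C's statements of initial conditions, and 'E' the explanandum):

1. L1, L2, . . . , Lm (universal laws)

2. C1, C2, . . . , Cn (statements of initial conditions)

Therefore, deductively:

3. E (the explanandum).

In the water-pipe example, the law that water expands when it freezes serves as L1, the drop below freezing point as C1, and the rupture as 'E'.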

Moreover, in contrast to the foregoing views - which stress such factors as logical relations, laws of nature and causality - a number of philosophers have argued that explanation, and not just scientific explanation, can be analysed entirely in pragmatic terms.

During the past half-century much philosophical attention has been focussed on explanation in science and in history. Considerable controversy has surrounded the question of whether historical explanation must be scientific, or whether history requires explanations of different types. Many diverse views have been articulated: the foregoing brief survey does not exhaust the variety.

In everyday life we encounter many types of explanation which appear not to raise philosophical difficulties, in addition to those already mentioned. Prior to take-off, a flight attendant explains how to use the safety equipment on the aeroplane. In a museum, the guide explains the significance of a famous painting. A mathematics teacher explains a geometrical proof to a bewildered student. A newspaper story explains how a prisoner escaped. Additional examples come easily to mind. The main point is to remember the great variety of contexts in which explanations are sought and given.

Another item of importance to epistemology is the widely held notion that non-demonstrative inference can be characterized as the inference to the best explanation. Given the variety of views on the nature of explanation, this popular slogan can hardly provide a useful philosophical analysis.

The inference to the best explanation is claimed by many to be a legitimate form of non-deductive reasoning, which provides an important alternative to both deduction and enumerative induction. Some would claim it is only through reasoning to the best explanation that one can justify beliefs about the external world, the past, theoretical entities in science, and even the future. Consider beliefs about the external world, and assume that we know what we do about our subjective and fleeting sensations. It seems obvious that we cannot deduce any truths about the existence of physical objects from truths describing the character of our sensations. But neither can we observe a correlation between sensations and something other than sensations, since by hypothesis all we have to rely on ultimately is knowledge of our sensations. Nonetheless, we may be able to posit physical objects as the best explanation for the character and order of our sensations. In the same way, various hypotheses about the past might best explain present memory, theoretical postulates in physics might best explain phenomena in the macro-world, and it is possible that our access to the future is through inference from past observations. But what exactly is the form of an inference to the best explanation?

When one presents such an inference in ordinary discourse, it often seems to have the following form:

1. ‘O’ is the case

2. If ‘E’ had been the case ‘O’ is what we would expect

Therefore there is a high probability that:

3. ‘E’ was the case.

This is the argument form that Peirce (1839-1914) called 'hypothesis' or 'abduction'. To consider a very simple example: upon coming across some footprints on the beach, we might reason to the conclusion that a person walked along the beach recently by noting that if a person had walked along the beach, one would expect to find just such footprints.

But is abduction a legitimate form of reasoning? Obviously, if the conditional in (2) above is read as a material conditional, such arguments would be hopelessly flawed. Since the proposition that 'E' materially implies 'O' is entailed by 'O', there would always be an infinite number of competing inferences to the best explanation, and none of them would seem to lend support to its conclusion. The conditionals we employ in ordinary discourse, however, are seldom, if ever, material conditionals: the vast majority of 'if . . . then . . .' statements do not seem to be truth-functionally complex. Rather, they seem to assert a connection of some sort between the states of affairs referred to in the antecedent (after the 'if') and in the consequent (after the 'then'). Perhaps the argument form has more plausibility if the conditional is read in this more natural way. But consider an alternative footprints explanation:

1. There are footprints on the beach

2. If cows wearing boots had walked along the beach recently one would expect to find such footprints

Therefore, there is a high probability that:

3. Cows wearing boots walked along the beach recently.

This inference has precisely the same form as the earlier inference to the conclusion that people walked along the beach recently, and its premisses are just as true, but we would no doubt consider both the conclusion and the inference simply silly. If we are to distinguish between legitimate and illegitimate reasoning to the best explanation, it would seem that we need a more sophisticated model of the argument form. In reasoning to an explanation we need criteria for choosing between alternative explanations. If reasoning to the best explanation is to constitute a genuine alternative to inductive reasoning, it is important that these criteria not be implicit premisses which would convert our argument into an inductive argument. Thus, for example, if the reason we conclude that people rather than cows walked along the beach is only that we are implicitly relying on the premiss that footprints of this sort are usually produced by people, then it is certainly tempting to suppose that our inference to the best explanation was really a disguised inductive inference of the form:

1. Most footprints are produced by people.

2. Here are footprints

Therefore in all probability,

3. These footprints were produced by people.

If we follow the suggestion made above, we might construe the form of reasoning to the best explanation as follows:

1. ‘O’ (a description of some phenomenon).

2. Of the set of available and competing explanations E1, E2, . . . , En capable of explaining 'O', E1 is the best according to the correct criteria for choosing among potential explanations.

Therefore in all probability:

3. E1.

There is, however, a crucial ambiguity in the concept of the best explanation. It might be true of an explanation E1 that it has the best chance of being correct without its being probable that E1 is correct. If I have two tickets in the lottery and one hundred other people each have one ticket, I am the person who has the best chance of winning, but it would be completely irrational to conclude on that basis that I am likely to win: it is much more likely that one of the other people will win than that I will. To conclude that a given explanation is actually likely to be correct, one must hold that it is more likely that it is true than that the disjunction of all other possible explanations is correct. And since, on many models of explanation, the number of potential explanations satisfying the formal requirements of adequate explanation is unlimited, this will be no small feat.
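The lottery arithmetic behind this point can be made explicit. The sketch below, in Python, uses exactly the ticket counts imagined in the text; everything else in it is invented for illustration of how the best individual chance can still be a poor bet.

    # Having the *best* chance is compatible with being very unlikely to win.
    my_tickets = 2
    other_people = 100          # one hundred other people
    tickets_each = 1

    total = my_tickets + other_people * tickets_each    # 102 tickets in all
    p_me = my_tickets / total                           # about 0.0196
    p_someone_else = (other_people * tickets_each) / total   # about 0.9804

    print(f"P(I win)             = {p_me:.4f}")
    print(f"P(someone else wins) = {p_someone_else:.4f}")
    assert p_me < p_someone_else   # best single chance, yet losing is far more probable

By parity, an explanation can be the single most probable candidate while the disjunction of its rivals remains far more probable than it.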

Explanations are also sometimes taken to be more plausible the more explanatory 'power' they have. This power is usually defined in terms of the number of things, or more likely the number of kinds of things, the theory can explain. Thus, Newtonian mechanics was so attractive, the argument goes, partly because of the range of phenomena the theory could explain.

The familiarity of a kind of explanation is also sometimes cited as a reason for preferring it to less familiar kinds of explanation. So if one has provided an evolutionary explanation for the disappearance of one organ in a creature, one should look more favourably on a similar sort of explanation for the disappearance of another organ.

In evaluating the claim that inference to the best explanation constitutes a legitimate and independent argument form, one must explore the question of whether it is a contingent fact that, at least, most phenomena have explanations, and that explanations satisfying a given criterion - simplicity, for example - are more likely to be correct. While it might be nice if the universe were structured in such a way that simple, powerful, familiar explanations were usually the correct ones, it is difficult to avoid the conclusion that if this is true it is an empirical fact about our universe, discoverable only a posteriori. If reasoning to the best explanation relies on such criteria, it seems that one cannot, without circularity, use reasoning to the best explanation to discover that reliance on such criteria is safe. But if one has some independent way of discovering that simple, powerful, familiar explanations are more often correct, then why should we think that reasoning to the best explanation is an independent source of information about the world? Again, why should we not conclude that it would be more perspicuous to represent the reasoning this way:

1. Most phenomena have the simplest, most powerful, familiar explanations available.

2. Here is an observed phenomenon, and E1 is the simplest, most powerful, familiar explanation available.

Therefore, in all probability:

3. This is to be explained by E1.

But the above is simply an instance of familiar inductive reasoning.

There are various ways of classifying mental activities and states. One useful distinction is that between the propositional attitudes and everything else. A propositional attitude is one whose description takes a sentence as complement of the verb. Belief is a propositional attitude: one believes (truly or falsely, as the case may be) that there are cookies in the jar. That there are cookies in the jar is the proposition expressed by the sentence following the verb. Knowing, judging, inferring, concluding and doubting are also propositional attitudes: one knows, judges, infers, concludes, or doubts that a certain proposition (the one expressed by the sentential complement) is true.

Though the propositions are not always explicit, hope, fear, expectation, intention, and a great many other terms are also (usually) taken to describe propositional attitudes: one hopes that (is afraid that, etc.) there are cookies in the jar. Wanting a cookie is, or can be construed as, a propositional attitude: wanting that one have (or eat, or whatever) a cookie. Intending to eat a cookie is intending that one will eat a cookie.

Propositional attitudes involve the possession and use of concepts and are, in this sense, representational. One must have some knowledge or understanding of what χ's are in order to think, believe or hope that something is 'χ'. In order to want a cookie, or intend to eat one, one must, in some way, know or understand what a cookie is: one must have this concept. There is a sense in which one can want to eat a cookie without knowing what a cookie is - if, for example, one mistakenly thinks there are muffins in the jar and, as a result, wants to eat what is in the jar (= cookies). But this sense is hardly relevant, for in this sense one can want to eat the cookies in the jar without wanting to eat any cookies. For this reason (and in this sense) the propositional attitudes are cognitive: they require or presuppose a level of understanding and knowledge, the kind of understanding and knowledge required to possess the concepts involved in occupying the propositional state.

Non-propositional mental states, though there is sometimes disagreement about their proper analysis, do not, at least on the surface, take propositions as their objects. Being in pain, being thirsty, smelling the flowers and feeling sad are introspectively prominent mental states that do not, like the propositional attitudes, require the application or use of concepts. One does not have to understand what pain or thirst is to experience pain or thirst. Assuming that pain and thirst are conscious phenomena, one must, of course, be conscious or aware of the pain or thirst to experience them, but awareness of must be carefully distinguished from awareness that. One can be aware of 'χ' - a thirst or a toothache - without being aware that it is, e.g., a thirst or a toothache. Awareness that, like belief that and knowledge that, is a propositional attitude; awareness of is not.

As the examples - pain, thirst, tickles, itches, hunger - are meant to suggest, the non-propositional states have a felt or experiential (phenomenal) quality to them that is absent in the case of the propositional attitudes. Aside from whom it is we believe to be playing the tuba, believing that John is playing the tuba is much the same as believing that Joan is playing the tuba. These are different propositional states, different beliefs, yet they are distinguished entirely in terms of their propositional content - in terms of what they are beliefs about. Contrast this with the difference between hearing John play the tuba and seeing him play the tuba. These differ, not just (as beliefs do) in what they are of or about (for these experiences are, in fact, of the same thing: John playing the tuba), but in their qualitative character: the one involves a visual, the other an auditory, experience. The difference between seeing John play the tuba and hearing John play the tuba is, then, a sensory, not a cognitive, difference.

Some mental states are a combination of sensory and cognitive elements: fears and terror, sadness and anger, joy and depression are ordinarily thought of in this way. Sensations are individuated not in terms of what propositions (if any) they represent, but (like visual and auditory experiences) in terms of their intrinsic character, as they are felt by the one experiencing them. But when we describe a person as being afraid that, sad that, or upset that (as opposed to merely thinking or knowing that) so-and-so happened, we typically mean to be describing the kind of sensory (feeling or emotional) quality accompanying the cognitive state. Being afraid that the dog is going to bite me is both to think that he might bite me - a cognitive state - and to feel fear or apprehension (sensory) at the prospect.

The perceptual verbs exhibit this kind of mixture, this duality between the sensory and the cognitive. Verbs like 'to see', 'to hear' and 'to feel' are often used to describe propositional (cognitive) states, but they describe these states in terms of the (sensory) way one comes to be in them. Seeing that there are two cookies left is coming to know this by seeing; feeling that there are two cookies left is coming to know this in a different way, by having tactile experiences (sensations).

On this model of the sensory-cognitive distinction (at least as it is realized in perceptual phenomena), sensations are a pre-conceptual, pre-cognitive vehicle of sensory information. The terms 'sensation' and 'sense-data' (or simply 'experience') were - and, in some circles, still are - used to describe this early phase of perceptual processing. It is currently more fashionable to speak of this sensory component in perception as the percept or the sensory information store, but the idea is generally the same: an acknowledgement of a stage in perceptual processing in which the incoming information is embodied in 'raw' sensory (pre-categorical, pre-recognitional) form. This early phase of the process is comparatively modular - relatively immune to, and insulated from, cognitive influence. The emergence of a propositional (cognitive) state - seeing that an object is red - depends, then, on the earlier occurrence of a conscious but nonetheless non-propositional condition: seeing (under the right conditions, of course) the red object. The sensory phase of this process constitutes the delivery of information (about the red object) in a particular form (visual); cognitive mechanisms are then responsible for extracting and using this information - for generating the belief (knowledge) that the object is red. (The phenomenon of blindsight suggests that this information can be delivered, perhaps in degraded form, at a non-conscious level.)

To speak of sensations of red objects, of the tuba and so forth, is to say that these sensations carry information about an object's colour, shape, orientation and position, and (in the case of audition) information about acoustic qualities such as pitch, timbre and volume. It is not to say that the sensations share the properties of the objects they are sensations of, or that they have the properties they carry information about. Auditory sensations are not loud, and visual sensations are not coloured. Sensations are bearers of non-conceptualized information, and the bearer of the information that something is red need not itself be red. It need not even be the sort of thing that could be red: it might be a certain pattern of neuronal events in the brain. Nonetheless, the sensation, though not itself red, will (being the normal bearer of the information) typically produce in the subject who undergoes the experience a belief, or tendency to believe, that something red is being experienced. Hence the existence of hallucinations.

Just as there are theories of mind which deny the existence of any state of mind whose essence is purely qualitative (i.e., does not consist of the state's extrinsic, causal properties), there are theories of perception and knowledge - cognitive theories - that deny a sensory component to ordinary sense perception. The sensory dimension (the look, feel, smell, taste of things) is (if it is not altogether denied) identified with some cognitive condition (knowledge or belief) of the experiencer. All seeing (not to mention hearing, smelling and feeling) becomes a form of believing or knowing. As a result, organisms that cannot know cannot have experiences. To avoid such strikingly counterintuitive results, these theories often posit implicit or otherwise unobtrusive (and, typically, undetectable) forms of believing or knowing.

Aside, though, from introspective evidence (closing and opening one's eyes, if it changes beliefs at all, doesn't just change beliefs: it eliminates and restores a distinctive kind of conscious experience), there is a variety of empirical evidence for the existence of a stage in perceptual processing that is conscious without being cognitive (in any recognizable sense). For example, experiments with brief visual displays reveal that when subjects are exposed for very brief (50 msec) intervals to information-rich stimuli, there is persistence (at the conscious level) of what is called an image or visual icon that embodies more information about the stimulus than the subject can cognitively process or report on. Subjects can exploit the information in this persisting icon by reporting on any part of the no-longer-present array of numbers (they can, for instance, report the top three numbers, the middle three, or the bottom three). They cannot, however, identify all nine numbers: they report seeing all nine, and they can identify any one of the nine, but they cannot identify all nine. Knowledge and belief, recognition and identification - these cognitive states, though present for any two or three numbers in the array, are absent for all nine numbers in the array. Yet the image carries information about all nine numbers (how else to account for subjects' ability to identify any number in the array after it is gone?). Obviously, then, the information is there, in the experience itself, whether or not it is, or even can be, cognitively processed. As psychologists conclude, there is a limit on the information-processing capacities of the later (cognitive) mechanisms that is not shared by the sensory stages themselves.

Perceptual knowledge is knowledge acquired by or through the senses. This includes most of what we know; some would say it includes everything we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something - that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up - by sensory means. Seeing that the light has turned green is coming to know this fact by use of the eyes; feeling that the melon is overripe is coming to know a fact - that the melon is overripe - by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Seeing a rotten kumquat is not at all like the experience of smelling, tasting or feeling a rotten kumquat. Yet all these experiences can result in the same knowledge - knowledge that the kumquat is rotten. Although the experiences are very different, they must, if they are to yield knowledge, embody information about the kumquat: the information that it is rotten. Seeing that the fruit is rotten differs from smelling that it is rotten, not in what is known, but in how it is known. In each case the information has the same source - the rotten kumquat - but it is, so to speak, delivered via different channels and coded in different experiential forms.

It is important to avoid confusing perceptual knowledge of facts, e.g., that the kumquat is rotten, with the perception of objects, e.g., rotten kumquats. It is one thing to see (taste, smell, feel) a rotten kumquat, and quite another to know (by seeing or tasting) that it is a rotten kumquat. Some people, after all, do not know what kumquats look like: they see a kumquat but do not realize (do not see that) it is a kumquat. Again, some people do not know what kumquats smell like: they smell a rotten kumquat and - thinking, perhaps, that this is the way this strange fruit is supposed to smell - do not realize from the smell, i.e., do not smell that, it is a rotten kumquat. In such cases people see and smell rotten kumquats - and in this sense perceive rotten kumquats - without ever knowing that they are kumquats, let alone rotten kumquats. They cannot tell, at least not by seeing and smelling, and not until they have learned something about (rotten) kumquats. Since our topic is perceptual knowledge - knowing, by sensory means, that something is 'F' - we will be primarily concerned with the question of what more, beyond the perception of F's, is needed to see that (and thereby know that) they are 'F'. The question is not how we see kumquats (for even the ignorant can do this) but how we know (if indeed we do) that that is what we see.

Much of our perceptual knowledge is indirect, dependent or derived. By this it is meant that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspaper, that our team has lost again; see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees - hence, comes to know - something about the gauge (that it reads 'empty'), the newspaper (what it says) or the person's expression, one would not see (hence, know) what one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot - not, at least, in this way - hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, and so forth) that some other condition, b's being 'G', obtains. When this occurs, the knowledge (that 'a' is 'F') is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.

Though perceptual knowledge about objects is often, in this way, dependent on knowledge of facts about different objects, the derived knowledge is sometimes about the same object. That is, we see that 'a' is 'F' by seeing, not that some other object is 'G', but that 'a' itself is 'G'. We see, by her expression, that she is nervous. She can tell that the fabric is silk (not polyester) by the characteristic 'greasy' feel of the fabric itself (not, as I do, by what is printed on the label). We tell whether it is an oak tree, a Porsche, a geranium, an igneous rock or a misprint by its shape, colour, texture, size, behaviour and distinctive markings. Perceptual knowledge of this sort is also derived - derived from the more basic facts (about 'a') we use to make the identification. In this case the perceptual knowledge is still indirect because, although the same object is involved, the facts we come to know about it are different from the facts that enable us to know it.

Derived knowledge is sometimes described as inferential, but this is misleading: at the conscious level there is no passage of the mind from premise to conclusion, no reasoning, no problem-solving. The observer, the one who sees that 'a' is 'F' by seeing that 'b' (or 'a' itself) is 'G', need not be (and typically is not) aware of any process of inference, any passage of the mind from one belief to another. The resulting knowledge, though logically derivative, is psychologically immediate. I could see that she was getting angry, so I moved my hand. I did not - at least not at any conscious level - infer (from her expression and behaviour) that she was getting angry. I could (or so it seemed to me) simply see that she was getting angry. It is this psychological immediacy that makes indirect perceptual knowledge a species of perceptual knowledge.

The psychological immediacy that characterizes so much of our perceptual knowledge - even (sometimes) the most indirect and derived forms of it - does not mean that no learning is required to know in this way. One is not born with (and may, in fact, never develop) the ability to recognize daffodils, muskrats and angry companions. It is only after long experience that one is able visually to identify such things. Beginners may do something corresponding to inference: they recognize relevant features of trees, birds and flowers, features they already know how to identify perceptually, and then infer (conclude), on the basis of what they see, and under the guidance of more expert observers, that it is an oak, a finch or a geranium. But experts (and we are all experts on many aspects of our familiar surroundings) do not typically go through such a process: the expert just sees that it is an oak, a finch or a geranium. The perceptual knowledge of the expert is still dependent, of course, since even an expert cannot see what kind of flower it is if she cannot first see its colour and shape; but the expert has developed identificatory skills that no longer require the sort of conscious inferential process that characterizes a beginner's efforts.

Coming to know that 'a' is 'F' by seeing that 'b' is 'G' obviously requires some background assumption on the part of the observer, an assumption to the effect that 'a' is 'F' (or is probably 'F') when 'b' is 'G'. If one does not assume (take for granted) that the gauge is properly connected, and thereby that it would not register 'empty' unless the tank were nearly empty, then even if one could see that it registered 'empty', one would not learn (hence, would not see) that one needed gas. At least, one would not see it by consulting the gauge. Likewise, in trying to identify birds, it is no use being able to see their markings if one does not know something about which birds have which marks - something of the form: a bird with these markings is (probably) a finch.

It would seem, moreover, that these background assumptions, if they are to yield knowledge that 'a' is 'F' - as they must if the observer is to see (by b's being 'G') that 'a' is 'F' - must themselves qualify as knowledge. For if this background fact is not known, if it is not known whether 'a' is 'F' when 'b' is 'G', then the knowledge of b's being 'G' is, taken by itself, powerless to generate the knowledge that 'a' is 'F'. If the conclusion is to be known to be true, the premises used to reach that conclusion must be known to be true. Or so it would seem.
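A minimal sketch can crystallize this dependence. The Python below is only an illustration of the point just made - the predicate names are invented - but it shows how derived perceptual knowledge behaves like a conjunction of two known premises, failing when either is missing:

    def knows_derived(knows_b_is_G: bool, knows_if_G_then_F: bool) -> bool:
        # One sees (knows) that a is F, by b's being G, only if BOTH the more
        # basic perceptual fact and the background connection are known.
        return knows_b_is_G and knows_if_G_then_F

    print(knows_derived(True, True))    # True: gauge seen AND connection known
    print(knows_derived(True, False))   # False: gauge seen, connection unknown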

The most generally accepted account of the externalism/internalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least part of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond his ken. Epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification. It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought content.

It should be carefully noticed that internalism may be construed as requiring either that the justifying factors literally be internal mental states of the person, or merely that they be cognitively accessible to him; views also differ on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Coherentist views can also be internalist, if both the beliefs and other states with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible. The first construal is not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; nor is it sufficient, because there are views according to which at least some mental states need not be actual (strong versions) or even possible (weak versions) objects of cognitive awareness.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is a result of a reliable process, and perhaps further conditions as well. This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of justification is seriously diminished. Such an externalist account of knowledge can accommodate the common-sense conviction that animals, young children and unsophisticated adults possess knowledge, though not the weaker conviction that such individuals are epistemically justified in their beliefs. It is also less vulnerable to internalist counter-examples, since the intuitions involved there pertain more clearly to justification than to knowledge. As with justification and knowledge, the traditional view of content has been strongly internalist in character. An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. Moreover, the adoption of an externalist account of mental content would seem to undermine internalism about justification: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification.

Nevertheless, a standard psycholinguistic theory, for instance, hypothesizes the construction of representations of the syntactic structures of the utterances one hears and understands, yet we are not aware of, and non-specialists do not even understand, the structures represented. Thus, first, cognitive science may attribute thoughts where common sense would not. Second, cognitive science may find it useful to individuate thoughts in ways foreign to common sense.

The representational theory of cognition gives rise to a natural theory of intentional states, such as believing, desiring and intending. According to this theory, an intentional state factors into two aspects: a 'functional' aspect that distinguishes believing from desiring and so on, and a 'content' aspect that distinguishes beliefs from each other, desires from each other, and so on. A belief that 'p' might be realized as a representation with the content that 'p' and the function of serving as a premise in inference; a desire that 'p' might be realized as a representation with the content that 'p' and the function of initiating processing designed to bring it about that 'p', and of discontinuing such processing when a belief that 'p' is formed.
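The two-factor idea can be displayed in a small sketch. The Python below is merely illustrative - the class and the example sentences are invented - but it shows how the same content can be paired with different functional roles, and the same role with different contents:

    from dataclasses import dataclass

    @dataclass
    class IntentionalState:
        role: str        # functional aspect: "belief", "desire", "intention", ...
        content: str     # content aspect: the proposition 'p'

    # Same content, different functional roles:
    b = IntentionalState("belief", "there are cookies in the jar")
    d = IntentionalState("desire", "there are cookies in the jar")

    # Same functional role, different contents:
    b2 = IntentionalState("belief", "the jar is empty")

    print(b, d, b2, sep="\n")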

A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be a representation (have content), and what it is for something to have some particular content rather than some other. There appear to be only four types of theory that have been proposed: theories that ground representation in (1) similarity, (2) covariance, (3) functional roles, (4) teleology.

Similarity theories hold that ‘r’ represents ‘x’ in virtue of being similar to ‘x’. This has seemed hopeless to most as a theory of mental representation because it appears to require that things in the brain must share properties with the things they represent: to represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalistic and does not involve property sharing can be worked out, but it is not obvious how.

Covariance theories hold that r’s representing ‘x’ is grounded in the fact that r’s occurrence covaries with that of ‘x’. This is most compelling when one thinks about detection systems: a neural structure in the visual system is said to represent vertical orientations if its firing covaries with the occurrence of vertical lines in the visual field. Dretske (1981) and Fodor (1987) have, in different ways, attempted to promote this idea into a general theory of content.

‘Content’ has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content; a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. ‘Content’ is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation’s content is just whatever it is that underwrites its semantic evaluation.

Likewise, functional role theories hold that r’s representing ‘χ’ is grounded in the functional role ‘r’ has in the representing system, i.e., on the relations imposed by specified cognitive processes between ‘r’ and other representations in the system’s repertoire. Functional role theories take their cue from such common sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

What is more, theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. The most generally accepted account of the latter distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist, if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

Atomistic theories take a representation’s content to be something that can be specified independently of that representation’s relations to other representations. What Fodor (1987) calls the crude causal theory, for example, takes a representation to be a COW (a mental representation with the same content as the word ‘cow’) if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraint on how COWs must or might relate to other representations.

The syllogism, or categorical syllogism, is the inference of one proposition from two premises. An example is: ‘all horses have tails; all things with tails are four-legged; so all horses are four-legged’. Each premise has one term in common with the conclusion, and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). Thus the first premise of the example, containing the subject ‘horses’, is the minor premise; the second is the major premise; and ‘having a tail’ is the middle term. This enables syllogisms to be classified according to the form (mood) of the premises and the conclusion, and also by figure, i.e., the way in which the middle term is placed in the premises.
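Regimented in modern first-order notation (a standard rendering of the example, not the traditional mood-and-figure notation, with ‘H’ for horse, ‘T’ for having a tail, and ‘F’ for four-legged):

\[
\begin{array}{ll}
\text{Minor premise:} & \forall x\,(Hx \rightarrow Tx) \\
\text{Major premise:} & \forall x\,(Tx \rightarrow Fx) \\
\text{Conclusion:} & \forall x\,(Hx \rightarrow Fx)
\end{array}
\]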

Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a relatively small number of valid forms of argument. There have subsequently been rearguard actions attempting to extend its power, but in general it has been eclipsed by the modern theory of quantification: the predicate calculus, the heart of modern logic, which has proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes ‘=’ as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that χ = y iff (∀F)(Fχ ↔ Fy), which illustrates the greater expressive power of the higher-order language.
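Written out in display form, the higher-order definition of identity just mentioned (the identity-of-indiscernibles reading) is:

\[
x = y \;\equiv_{df}\; \forall F\,(Fx \leftrightarrow Fy)
\]

that is, two things are identical just in case they share all their properties. No first-order formula can say this, since first-order variables cannot range over properties.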

Modal logic was of great importance historically, particularly in the light of various doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher Clarence Irving Lewis (1883-1964); although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His independent proofs showing that from a contradiction anything follows helped to provoke the development of relevance logic, which uses a notion of entailment stronger than that of strict implication.

Modal logic is obtained by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written ‘N’ and ‘M’), meaning necessarily and possibly, respectively. Uncontroversial axioms include p ➞ ◊p and □p ➞ p. Controversial ones include □p ➞ □□p (if a proposition is necessary, it is necessarily necessary, characteristic of the system known as S4) and ◊p ➞ □◊p (if a proposition is possible, it is necessarily possible, characteristic of the system known as S5). In classical modal realism, the doctrine advocated by David Lewis (1941-2002), different possible worlds are to be thought of as existing exactly as this one does: thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowns, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge that the notion fails to fit either with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denied that any other way of interpreting modal statements is tenable.
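Gathered together, the schemata just cited, with their conventional names (a standard summary, not the text’s own axiomatization):

\[
\begin{array}{ll}
\Box p \rightarrow p, \quad p \rightarrow \Diamond p & \text{(uncontroversial)} \\
\Box p \rightarrow \Box\Box p & \text{(4, characteristic of S4)} \\
\Diamond p \rightarrow \Box\Diamond p & \text{(5, characteristic of S5)}
\end{array}
\]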

Saul Kripke (1940-), the American logician and philosopher, contributed to the classical modern treatment of the topic of reference by clarifying the distinction between names and definite descriptions, and by opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.

Semantics is one of the three branches into which ‘semiotic’ is usually divided: the study of the meaning of words, and of the relation of signs to the objects to which the signs are applicable. In formal studies, a semantics is provided for a formal language when an interpretation or ‘model’ is specified. However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . . ) and their meanings. An influential proposal is to begin by attempting to provide a truth definition for the language, which will involve giving a full account of the effect that terms of different kinds have on the truth conditions of sentences containing them.

Holding that the basic case of reference is the relation between a name and the person or object which it names, the philosophical problems include trying to elucidate that relation, and to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between myself and the word ‘I’, are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke’s Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects, and how to conduct the debate about each such issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that determines the term’s contribution to the truth condition of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth-conditions to sentences. Other approaches search for a more substantive account, holding that reference is constituted by causal, psychological, or social relations between words and things.

However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the ‘Liar family’, from the purely logical paradoxes in which no such notions are involved, such as Russell’s paradox, or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is said that this element is responsible for the contradictions, although self-reference is often benign. For instance, the sentence ‘All English sentences should have a verb’ includes itself in the domain of sentences it is talking about. Set theory can proceed by circumventing the purely logical paradoxes by technical means, even while there is no agreed solution to the semantic paradoxes; but this may be a way of ignoring the similarities between the two families, and there remains the possibility that, since there is no agreed solution to the semantic paradoxes, our understanding of Russell’s paradox may be imperfect as well.

Truth and falsity are the two classical truth-values that a statement, proposition or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains, the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. A presupposition, moreover, is a proposition whose truth is necessary for either the truth or the falsity of another statement: thus if ‘p’ presupposes ‘q’, ‘q’ must be true for ‘p’ to be either true or false. In the theory of knowledge, the English philosopher and historian R.G. Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of ‘absolute presuppositions’ which are not themselves capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means either that a third truth value is found, ‘intermediate’ between truth and falsity, or that classical logic is preserved, but it is impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth and falsity without knowing more than the formation rules of the language. There is some consensus that, at least where definite descriptions are involved, such examples may equally be handled by regarding the overall sentence as false when the existence claim fails, and explaining the data that the English philosopher P.F. Strawson (1919-2006) relied upon as the effects of ‘implicature’.

Views about the meaning of terms will often depend on classifying the implicatures of sayings involving the terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may, as a matter of convention, carry an implicature. Thus, one of the relations between ‘he is poor and honest’ and ‘he is poor but honest’ is that they have the same content (are true in just the same conditions), but the second has implicatures (that the combination is surprising or significant) that the first lacks.

In classical logic, nonetheless, a proposition may be true or false. If the former, it is said to take the truth-value true, and if the latter the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of any other variable. Logics with intermediate values are called ‘many-valued logics’.
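As an illustration of intermediate values, the following is a minimal sketch of the strong Kleene three-valued tables, one standard many-valued system; the numeric encoding of the third value as 0.5 is an assumption of this sketch, not anything in the text:

```python
# Strong Kleene three-valued logic, encoded numerically:
# T = 1.0 (true), U = 0.5 (the intermediate value), F = 0.0 (false).

T, U, F = 1.0, 0.5, 0.0
NAMES = {T: "T", U: "U", F: "F"}

def neg(a):
    return 1.0 - a          # negation swaps true and false, fixes U

def conj(a, b):
    return min(a, b)        # conjunction takes the 'worst' value

def disj(a, b):
    return max(a, b)        # disjunction takes the 'best' value

def cond(a, b):
    return max(neg(a), b)   # material conditional as not-a or b

print("p q | p&q pvq p->q")
for p in (T, U, F):
    for q in (T, U, F):
        print(NAMES[p], NAMES[q], "|",
              NAMES[conj(p, q)], " ", NAMES[disj(p, q)], " ", NAMES[cond(p, q)])
```

Run, this prints the nine-row tables: a compound with a ‘U’ input comes out ‘U’ unless the other input already settles the matter.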

Nevertheless, a definition of the predicate ‘. . . is true’ for a language must satisfy convention ‘T’, the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83). His method of ‘recursive’ definition enables us to say for each sentence what it is that its truth consists in, while giving no verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a ‘metalanguage’; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth-predicate. While this enables the approach to avoid the contradictions of the paradoxes, it conflicts with the idea that a language should be able to say everything that there is to say, and other approaches have become increasingly important.
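Convention T requires that an adequate definition entail, for each sentence of the language, an instance of the schema (this is the standard formulation):

\[
S \text{ is true if and only if } p
\]

where ‘S’ is a metalanguage name of an object-language sentence and ‘p’ is that sentence’s translation into the metalanguage; for example, ‘Snow is white’ is true if and only if snow is white.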

So, the truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of ‘snow is white’ is that snow is white; the truth condition of ‘Britain would have capitulated had Hitler invaded’ is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.

On this view, inferential semantics takes the role of a sentence in inference to give a more important key to its meaning than its ‘external’ relations to things in the world. The meaning of a sentence becomes its place in a network of inferences that it legitimates. Also known as functional role semantics or procedural semantics, the view is related to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.

Moreover, the semantic theory of truth holds that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or about truth as shared across different languages. The view is similar to the disquotational theory.

The redundancy theory, also known as the ‘deflationary view of truth’, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell’s paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. Ramsey also gave his name to the Ramsey sentence of a theory: by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, and replacing the term by a variable, instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then, by the Löwenheim-Skolem theorem, the result will be interpretable in almost any domain, and the content of the theory may reasonably be felt to have been lost.
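Schematically (a standard textbook illustration, not the author’s own formula): if the theory’s assertions involving ‘quark’ are gathered into a single open formula T(quark), the Ramsey sentence replaces the term with a variable and existentially quantifies:

\[
T(\text{quark}) \;\leadsto\; \exists X\, T(X)
\]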

For their part, both Frege and Ramsey agree that the essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence, redundancy); and (2) that in less direct contexts, such as ‘everything he said was true’, or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from a true proposition. For example, the second may translate as ‘(∀p, q)((p & (p ➞ q)) ➞ q)’, where there is no use of a notion of truth.

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’, or ‘truth is a norm governing discourse’. Postmodern writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth; but perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when ‘not-p’.

The disquotational theory holds, in its simplest formulation, that expressions of the form ‘S is true’ mean the same as expressions of the form ‘S’. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ‘Dogs bark’ is true, or whether they say that dogs bark. In the former representation of what they say, the sentence ‘Dogs bark’ is mentioned, but in the latter it appears to be used; so the claim that the two are equivalent needs careful formulation and defence. On the face of it, someone might know that ‘Dogs bark’ is true without knowing what it means (for instance, if he finds it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the ‘redundancy theory of truth’.


Entailment is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Several philosophers identify this with its being logically impossible that the premises should all be true yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, which would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for a stronger notion is the field of relevance logic.

From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is a purely empirical enterprise.

But this point of view by no means embraces the whole of the actual process, for it overlooks the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a ‘theory’. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the ‘truth’ of the theory lies.

Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was more successful in marshalling the evidence for evolution than in providing a convincing mechanism for genetic change; and Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as ‘neo-Darwinism’ became the orthodox theory of evolution in the life sciences.

In the 19th century there was an attempt to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). Its premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more ‘primitive’ social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called ‘social Darwinism’ emphasizes the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competition and aggressive relations between people in society. More recently, the relationship between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Once again, psychological attempts have been made to establish such points by appropriate objective means, with evidence drawn from evolutionary principles, on which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who ‘free-ride’ on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself, and by William James, as well as by the sociobiology of E.O. Wilson. The term is applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.

Another assumption frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin’s view of natural selection as war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, however, cooperation appears to exist in a complementary relation to competition. From such complementary relationships emerge self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.

According to E.O. Wilson, the ‘human mind evolved to believe in the gods’ and people ‘need a sacred narrative’ to have a sense of higher purpose. Yet it is also clear that the ‘gods’ in his view are merely human constructs, and, therefore, there is no basis for dialogue between the world-view of science and that of religion. ‘Science for its part’, said Wilson, ‘will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiment.’ The eventual result of the competition between the two world-views, he believes, will be the secularization of the human epic and of religion itself.

Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect ‘reality’. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing ‘reality’ as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide ‘comprehensible’ guides to living. In this way, Man’s imagination and intellect play vital roles in his survival and evolution.

Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of ‘logical positivist’ approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the ‘explanans’ (that which does the explaining) and the ‘explanandum’ (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that the laws of planetary motion of Johannes Kepler (or Keppler, 1571-1630) were explained by being shown to be deducible from Newton’s laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adequate to capture the requirements we make of explanations. These may include, for instance, that we have a ‘feel’ for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on; and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.

The argument to the best explanation is the view that once we can select the best of the available explanations of an event, we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
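A quick calculation makes the point vivid (a minimal sketch using the binomial model; the figures follow from the example’s numbers, not from anything further in the text):

```python
from math import comb

def binomial_likelihood(p, heads=530, tosses=1000):
    """Probability of exactly `heads` heads in `tosses` tosses
    when each toss lands heads with probability p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

fair = binomial_likelihood(0.5)     # hypothesis: the coin is fair
biased = binomial_likelihood(0.53)  # hypothesis: bias matches the data

# The biased hypothesis fits best by construction, but only by a modest
# factor (about 6), so a strong prior in favour of fairness can still make
# believing the coin fair, or suspending judgement, the sensible course.
print(f"likelihood ratio (biased / fair) = {biased / fair:.1f}")
```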

The philosophy of language may be considered as the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotic into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language, and it mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy in the 20th century has been informed by the belief that philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, the basis of the division between syntax and semantics, and the problems of understanding the number and nature of specific semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.

On this conception, to understand a sentence is to know its truth-conditions. The conception has remained central in a distinctive way: those who offer opposing theories characteristically define their position by reference to it. The conception of meaning as truth-conditions need not and ought not be advanced as in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.

The meaning of a complex expression is a function of the meanings of its constituents. This is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms (proper names, indexicals, and certain pronouns) this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
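To make the compositional picture concrete, here is a minimal sketch of a toy truth-conditional semantics; the mini-lexicon and helper names are invented for illustration, not drawn from any theorist cited here:

```python
# A toy compositional semantics: names are assigned referents, predicates
# are assigned extensions, and the truth value of a sentence is computed
# from the semantic values of its parts.

reference = {"London": "london", "Paris": "paris"}   # singular terms
extension = {"is beautiful": {"paris"},              # predicates
             "is a city": {"london", "paris"}}

def true_atomic(name, predicate):
    # An atomic sentence 'N P' is true iff the referent of N
    # falls in the extension of P.
    return reference[name] in extension[predicate]

def true_negation(name, predicate):
    # The meaning of 'not' is exhausted by its contribution to
    # truth-conditions: 'N is not P' is true iff 'N P' is not true.
    return not true_atomic(name, predicate)

print(true_atomic("Paris", "is beautiful"))     # True
print(true_negation("London", "is beautiful"))  # True
```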

The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Consider the axiom: ‘London’ refers to the city in which there was a huge fire in 1666. This is a true statement about the reference of ‘London’. It is a consequence of a theory which substitutes this axiom for the simple axiom ‘London’ refers to London in our truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a speaker can understand the name ‘London’ without knowing that last-mentioned truth condition, the replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraint on axioms in a way which does not presuppose any previous, non-truth-conditional conception of meaning. Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental: first, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.

Since the content of a claim that the sentence ‘Paris is beautiful’ is true amounts to nothing more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than the grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its conceptual claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth conditional account of meaning. If the claim that the sentence ‘Paris is beautiful’ is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A.J. Ayer, the later Wittgenstein, Quine, Strawson and Horwich and (confusingly and inconsistently, if this article is correct) Frege himself. But is the minimal theory correct?

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact it seems that each instance of the equivalence principle can itself be explained: an instance such as ‘London is beautiful’ is true if and only if London is beautiful can be derived from the facts that ‘London’ refers to London and that ‘is beautiful’ is true of just the beautiful things. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does; but that is very implausible, since it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’.

The counterfactual conditional is sometimes known as the subjunctive conditional. A counterfactual conditional is a conditional of the form ‘if p were to happen, q would’, or ‘if p were to have happened, q would have happened’, where the supposition of ‘p’ is contrary to the known fact that ‘not-p’. Such assertions are nevertheless useful: ‘if you had broken the bone, the X-ray would have looked different’, or ‘if the reactor were to fail, this mechanism would click in’ are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals (‘if the metal were to be heated, it would expand’), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since the material conditional comes out true whenever ‘p’ is false, so there would be no division between true and false counterfactuals.

Although the subjunctive form indicates the counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form, or a simple conditional form: ‘If you run out of water, you will be in trouble’ seems equivalent to ‘if you were to run out of water, you would be in trouble’, in other contexts there is a big difference: ‘If Oswald did not kill Kennedy, someone else did’ is clearly true, whereas ‘if Oswald had not killed Kennedy, someone would have’ is most probably false.

The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether ‘q’ is true in the ‘most similar’ possible worlds to ours in which ‘p’ is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and categorizing them as counterfactual or not is of limited use.

In any conditional proposition of the form ‘if p then q’, the condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is the material implication, merely telling us that either ‘not-p’ or ‘q’; stronger conditionals include elements of modality, corresponding to the thought that if ‘p’ is true then ‘q’ must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether the form is semantically ambiguous, yielding different kinds of conditionals with different meanings, or whether, pragmatically, there is one basic meaning, with surface differences arising from other implicatures.
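For reference, the standard truth table for the material conditional, which makes vivid why it cannot serve for counterfactuals: every row with a false antecedent comes out true.

\[
\begin{array}{cc|c}
p & q & p \rightarrow q \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]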

There are many forms of reliabilism. Among reliabilist theories of justification (as opposed to knowledge) there are two main varieties: reliable indicator theories and reliable process theories. In their simplest forms, the reliable indicator theory says that a belief is justified in case it is based on reasons that are reliable indicators of the truth, and the reliable process theory says that a belief is justified in case it is produced by cognitive processes that are generally reliable.

The reliable process theory is grounded on two main points. First, the justificational status of a belief depends on the psychological processes that cause, or causally sustain, it, not simply on the logical status of the proposition, or its evidential relation to other propositions. Even a tautology can be believed unjustifiably, if one arrives at that belief through inappropriate psychological processes. Similarly, a detective might have a body of evidence supporting the hypothesis that Mr. Nogot is guilty; but if the detective fails to put the pieces of evidence together, and instead believes in Mr. Nogot’s guilt only because of his unsavoury appearance, the detective’s belief is unjustified. The critical determinants of justificational status, then, are psychological processes: perception, memory, reasoning, guessing, or introspecting.

Just as there are many forms of foundationalism and coherentism, how is reliabilism related to these other two theories of justification? We usually regard it as a rival, and this is apt in so far as foundationalism and coherentism traditionally focused on purely evidential relations rather than psychological processes. But we might also offer reliabilism as a deeper-level theory, subsuming some precepts of either foundationalism or coherentism. Foundationalism says that there are ‘basic’ beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that reliable non-inferential processes have formed the basic beliefs. Coherentism stresses the role of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. Reliabilism could thus complement foundationalism and coherentism rather than compete with them.

Reliabilism in epistemology follows the suggestion that a subject may know a proposition ‘p’ if (1) ‘p’ is true, (2) the subject believes ‘p’, and (3) the belief that ‘p’ is the result of some reliable process of belief formation. As the suggestion stands, it is open to counter-examples: a belief may be the result of some generally reliable process which was in fact malfunctioning on this occasion, and we would be reluctant to attribute knowledge to the subject if this were so, although the definition would be satisfied. Reliabilism pursues appropriate modifications to avoid the problem without giving up the general approach; such examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman’s claim about local reliability and knowledge, it will not be simple. The thesis that counts as a causal theory of justification holds that a belief is justified in case it was produced by a type of process that is ‘globally’ reliable, that is, its propensity to produce true beliefs, which can be defined, to an acceptable approximation, as the proportion of the beliefs it produces (or would produce, were it used as often as opportunity allows) that are true, is sufficiently great. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in the work of F.P. Ramsey (1903-30). In the theory of probability, Ramsey was the first to show how a personalist theory could be developed, based on a precise behavioural notion of preference and expectation. In the philosophy of mathematics, much of Ramsey’s work was directed at saving classical mathematics from ‘intuitionism’, or what he called the ‘Bolshevik menace of Brouwer and Weyl’. In the philosophy of language, Ramsey was one of the first deflationists about truth, a position he combined with radical views of the function of many kinds of propositions: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey was also one of the earliest commentators on the early work of Wittgenstein, and it was his continuing friendship with Wittgenstein that led to the latter’s return to Cambridge and to philosophy in 1929.
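The notion of global reliability invoked above, the proportion of true beliefs a process (would) produce, lends itself to a simple rendering (an illustrative sketch only, not Goldman’s or Ramsey’s formal apparatus; the threshold figure is an assumption of the sketch):

```python
import random

def reliability(process, trials=10_000):
    """Estimate the proportion of true beliefs a belief-forming
    process produces, by sampling its outputs."""
    return sum(process() for _ in range(trials)) / trials

def justified(process, threshold=0.9):
    # A belief is justified, on this toy rendering, when the process
    # type that produced it is reliable enough.
    return reliability(process) >= threshold

perception = lambda: random.random() < 0.95   # mostly truth-conducive
guessing   = lambda: random.random() < 0.50   # chance level

print(justified(perception))  # True:  beliefs so formed count as justified
print(justified(guessing))    # False: beliefs so formed do not
```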

Ramsey’s sentence theory, again, generates a sentence by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If we repeat the process for all of a group of theoretical terms, the sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of a nomic, counterfactual or similar ‘external’ relation between belief and truth, closely allied to the nomic sufficiency account of knowledge. The core of this approach is that χ’s belief that ‘p’ qualifies as knowledge just in case ‘χ’ believes ‘p’ because of reasons that would not obtain unless ‘p’ were true, or because of a process or method that would not yield belief in ‘p’ if ‘p’ were not true. For example, ‘χ’ would not have its current reasons for believing there is a telephone before it, or would not have come to believe this in the way it does, unless there were a telephone before it; thus, there is a counterfactually reliable guarantor of the belief’s being true. The relevant-alternatives version of the counterfactual approach says that ‘χ’ knows that ‘p’ only if there is no ‘relevant alternative’ situation in which ‘p’ is false but ‘χ’ would still believe that ‘p’. One’s evidence must be sufficient to eliminate all the alternatives to ‘p’, where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’; that is, one’s justification or evidence for ‘p’ must be sufficient for one to know that every alternative to ‘p’ is false. Sceptical arguments have exploited this element of our thinking about knowledge. These arguments call our attention to alternatives that our evidence cannot eliminate. The sceptic asks how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this nature that we cannot eliminate, as well as others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that the requirement is seldom, if ever, satisfied.

All the same, the distinction between the ‘in itself’ and the ‘for itself’ originated in the Kantian logical and epistemological distinction between a thing as it is in itself, and that thing as an appearance, or as it is for us. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing in so far as it stands in relation to our cognitive faculties and other objects. ‘Now a thing in itself cannot be known through mere relations: and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself’. Kant applies this same distinction to the subject’s cognition of itself. Since the subject can know itself only in so far as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus as it is related to its own self, it represents itself ‘as it appears to itself, not as it is’. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant in so far as the distinction between what an object is in itself and what it is for a knower is applied to the subject’s own knowledge of itself.

Hegel (1770-1831) begins the transformation of the epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what is, as it is in fact or in itself, necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in fact or in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of the potentiality of that thing to enter into specific explicit relations with itself. And, just as for consciousness to be explicitly itself is for it to be for itself by being in relation to itself, i.e., to be explicitly self-conscious, the for-itself of any entity is that entity in so far as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed of a plant is that plant in itself or implicitly, while the mature plant which involves actual relations among the plant’s various organs is the plant ‘for itself’. In Hegel, then, the in-itself/for-itself distinction becomes universalized, and is applied to all entities, not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being in itself of the plant, or the plant as potential adult, is ontologically distinct from the being for itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction. To know a thing, it is necessary to know both the actual explicit self-relations which mark the thing (the being for itself of the thing), and the inherent simple principle of these relations, or the being in itself of the thing. Real knowledge, for Hegel, thus consists in a knowledge of the thing as it is in and for itself.

Sartre’s distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent being which is intended by consciousness, i.e., being in itself. What it is for consciousness to be, being for itself, is marked by self-relation. Sartre posits a ‘pre-reflective cogito’, such that every consciousness of ‘χ’ necessarily involves a ‘non-positional’ consciousness of the consciousness of ‘χ’. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself in so far as it is related to itself by appearing to itself, and in Hegel every entity can be considered as both ‘in itself’ and ‘for itself’, in Sartre, to be self-related or for itself is the distinctive ontological mark of consciousness, while to lack relations or to be in itself is the distinctive ontological mark of non-conscious entities.

This conclusion conflicts with another strand in our thinking about knowledge, namely that we know many things. Thus, there is a tension in our ordinary thinking about knowledge: We believe that knowledge is, in the sense indicated, an absolute concept, and yet we also believe that there are many instances of that concept.

If one finds absoluteness too central a component of our concept of knowledge to be relinquished, one could argue from the absolute character of knowledge to a sceptical conclusion (Unger, 1975). Most philosophers, however, have taken the other course, choosing to respond to the conflict by giving up, perhaps reluctantly, the absolute criterion. This latter response holds as sacrosanct our commonsense belief that we know many things (Pollock, 1979, and Chisholm, 1977). Each approach is subject to the criticism that it preserves one aspect of our ordinary thinking about knowledge at the expense of denying another. We can view the theory of relevant alternatives as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

This approach to the theory of knowledge sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some process of natural selection, the best example of which is Darwin’s theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At some point, for instance, a mutation occurred in a human population in tropical Africa that changed the hemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.
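The arithmetic behind such spread can be made concrete. The following is a minimal sketch in Python, with a made-up fitness advantage of five percent and a haploid (single-copy) model chosen purely for simplicity; the real sickle-cell case involves heterozygote advantage in a diploid population, which this toy deliberately ignores. Each generation, the variant’s frequency is reweighted by its relative fitness:

# Deterministic one-locus selection in a haploid population (a sketch).
# Illustrative numbers only: w_advantaged = 1.05 is an assumed, not
# measured, fitness advantage.

def next_freq(p, w_advantaged=1.05, w_other=1.0):
    """Frequency of the advantaged variant after one generation:
    p' = p * w_A / (p * w_A + (1 - p) * w_B)."""
    mean_fitness = p * w_advantaged + (1 - p) * w_other
    return p * w_advantaged / mean_fitness

p = 0.01  # the new variant starts rare
for generation in range(300):
    p = next_freq(p)
print(f"frequency after 300 generations: {p:.3f}")  # approaches 1.0

Even this crude model shows the point of the paragraph above: a tiny, consistent difference in fitness, compounded over generations, is enough to carry a rare variant to near-universality.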

When proximate and evolutionary explanations are carefully distinguished, many questions in biology make more sense. A proximate explanation describes a trait: its anatomy, physiology, and biochemistry, as well as its development from the genetic instructions provided by a bit of DNA in the fertilized egg to the adult individual. An evolutionary explanation is about why the DNA specifies that trait in the first place, and why it encodes one kind of structure and not some other. Proximate and evolutionary explanations are not alternatives; both are needed to understand every trait. A proximate explanation of the external ear would describe its arteries and nerves, and how it develops from the embryo to the adult form. Even if we know this, however, we still need an evolutionary explanation of how its structure gives creatures with ears an advantage over those that lack it, and of how selection shaped the ear into its current form. To take another example, a proximate explanation of taste buds describes their structure and chemistry, how they detect salt, sweet, sour, and bitter, and how they transform this information into impulses that travel via neurons to the brain. An evolutionary explanation of taste buds shows why they detect saltiness, acidity, sweetness, and bitterness instead of other chemical characteristics, and how the capacity to detect these characteristics helps an organism cope with life.

Chance can influence the outcome at each stage: First, in the creation of a genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual’s actual reproductive success; fourth, in whether a gene, even if favored in one generation, is, by happenstance, eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, were the process run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.
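Gould’s ‘replaying the tape’ image can itself be illustrated with a toy experiment. The sketch below, again in Python and with arbitrary toy parameters (a population of 100 and a variant starting at 10 copies are my assumptions, not anything from the biological literature), tracks a variant with no fitness advantage at all; pure sampling accident decides whether it is fixed or lost, so each rerun of the ‘tape’ (each seed) can give a different history:

import random

def replay(seed, pop_size=100, generations=1000):
    """Drift of a neutral variant: chance alone decides its fate."""
    random.seed(seed)
    count = 10  # copies of the variant in a population of pop_size
    for _ in range(generations):
        p = count / pop_size
        # Each offspring independently inherits the variant with probability p.
        count = sum(1 for _ in range(pop_size) if random.random() < p)
        if count == 0 or count == pop_size:
            break
    return "fixed" if count == pop_size else "lost" if count == 0 else "segregating"

# Re-running the 'tape' with different seeds gives different outcomes.
for seed in range(5):
    print(seed, replay(seed))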

We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analyzed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean ‘Does natural selection always take the best path for the long-term welfare of a species?’, the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean ‘Does natural selection create every adaptation that would be valuable?’, the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. The mere usefulness of a trait does not mean that it will evolve.

The three major components of the model of natural selection are variation, selection, and retention. According to Darwin’s theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected, while those that do not are not; nevertheless, this selection is responsible for the appearance that variations occur intentionally, by design. In the modern theory of evolution, genetic mutations provide the blind variations (blind in the sense that variations are not influenced by the effects they would have: The likelihood of a mutation is not correlated with the benefits or liabilities that the mutation would confer on the organism). The environment provides the filter of selection, and reproduction provides the retention. Retention is achieved because organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features better adapted. Evolutionary epistemology applies this blind-variation-and-selective-retention model to the growth of scientific knowledge and to human thought processes in general.
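Read as an algorithm, the model has a definite shape, and a sketch can make its three components explicit. The Python below is my own illustration, not anything from the evolutionary-epistemology literature: a toy ‘environment’ (a target vector) stands in for the selective filter, and the blindness of variation shows up in the fact that mutations are drawn at random, with no look-ahead to their effect on fitness:

import random

def mutate(genome):
    """VARIATION: blind; the change is made without regard to its effect."""
    i = random.randrange(len(genome))
    return genome[:i] + [genome[i] + random.choice([-1, 1])] + genome[i + 1:]

def fitness(genome, target):
    """SELECTION: the 'environment' scores variants by how well they fit it."""
    return -sum(abs(g - t) for g, t in zip(genome, target))

def evolve(pop_size=50, length=8, generations=200):
    target = [random.randint(0, 9) for _ in range(length)]  # the environment
    population = [[0] * length for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        # RETENTION: reproduction carries the better-adapted variants forward.
        population = sorted(population + offspring,
                            key=lambda g: fitness(g, target))[-pop_size:]
    return max(fitness(g, target) for g in population)

print(evolve())  # best fitness tends toward 0, i.e., well fitted to the 'environment'

Nothing in the loop knows where it is going: variation is random, yet the filter-and-retain cycle produces populations that look designed for their environment, which is precisely the appearance the paragraph above describes.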

The parallel between biological evolution and conceptual or ‘epistemic’ evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology holds that biological evolution is the main cause of the growth of knowledge. On this view, called the ‘evolution of cognitive mechanisms program’ by Bradie (1986) and the ‘Darwinian approach to epistemology’ by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms which guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology.

To determine the value placed upon innate ideas, consider how these have been variously defined by philosophers: either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at a particular time, e.g., as babies (the dispositional sense). Understood in either way, they were invoked to account for our recognition of certain truths, such as those of mathematics, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.

One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate, and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: Our idea of God, for example, is taken as a source for the meaning of the word God. When innate ideas are understood propositionally, their supposed innateness is taken as evidence of their truth. This latter thesis clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties, the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky’s influential account of the mind’s linguistic capacities.
