The Nature of Mind

 

Mind – “1. The human consciousness that originates in the brain and is manifested especially in thought, perception, emotion, will, memory, and imagination; 2. The collective conscious and unconscious processes in a sentient organism that direct and influence mental and physical behavior; 3. The principle of intelligence; the spirit of consciousness regarded as an aspect of reality. 4. The faculty of thinking, reasoning, and applying knowledge.”(1)

Many philosophers have maintained that there is something that they refer to as a “Mind” or “Soul”, or (to borrow from Freud) the “Id”, that is separate and distinct from the physical brain. They do this for many reasons, but essentially because, by merely thinking about thinking, they cannot detect anything of what biologists tell us is going on in the brain. They therefore jump to the conclusion that what the scientists tell us is happening cannot possibly be connected to what they feel is going on when they think, especially when they compare the third-person objective descriptions provided by the scientists with the first-person subjective feel of what it is like as seen from “in here”.

Biologists describe the human brain as a bio-chemical computer, with a myriad of nerve cells each obeying relatively simple electro-chemical processing rules. (Some estimates suggest that the human brain contains approximately 10^11 (one hundred billion) neurons, each having on average 7,000 synaptic connections to other neurons, giving the brain about 7 × 10^14 synapses.) They go on to describe how networks of these neurons react when stimulated. But they do not describe how people think. Psychologists, on the other hand, provide detailed descriptions of how complex and subtle our thinking processes are. But they do not describe how that happens. The psychological and sociological sciences are full of detailed descriptions of strange, almost incomprehensible processes, and emphasize the unpredictability of the human mind. Many people listen to what the biologists and psychologists tell us, and cannot relate the physical bio-chemical descriptions of the biologists with the complex and unpredictable processes described by the psychologists. And they especially cannot relate those descriptions with what they themselves perceive is going on in their own minds. There appears to be a distinct qualitative difference between the things that the biologists are telling us, and the things that the psychologists are telling us about what goes on when we think.

The Explanatory Gap

The “explanatory gap” is the term introduced by philosopher Joseph Levine(2) for the gap between what the biologists are describing, and what the psychologists are describing. The gap is the alleged difficulty that a materialist-scientific theory of mind has in explaining how physical properties, materialist causal laws, and physical events (the descriptions offered by the biologists) can give rise to the subjective phenomenal aspects of the way things feel when they are consciously experienced (the descriptions offered by the psychologists).

As an aside, “materialism” is also called “physicalism”. But “physicalism” is often preferred because of its relation to “physical sciences”, and because the physical sciences often deal with the “immaterial” — concrete things like forces and fields, and abstract things like numbers and phase-spaces. While there are subtle differences between the two “isms” for the purists, for the purposes of this essay, I will treat the two as synonymous.

According to many, this explanatory gap (between the physical truths of the world, and the phenomenal truths of subjective experience) is real and meaningful. According to the proponents of the explanatory gap, whatever materialist account we might imagine of subjective conscious experience will leave it completely puzzling why there should be such a connection between the objective physical story and the subjective conscious experience(3). According to David Chalmers, for example, finding an acceptable materialist explanation for the subjectiveness of experience is “the hard problem” of Philosophy of Mind(4).

There is a weak understanding of the “gap”, and a strong understanding. The weak version of the explanatory gap is an epistemological concept — it regards the explanatory gap as resulting from a difference between two kinds of understanding, objective and subjective. A search for an objective understanding characterizes the physical sciences – physics, chemistry, biology, and so on. These look at things objectively, describing what their functions are, what they are composed of, how they operate. Physical science aims to uncover causal laws and behavioral regularities involving things and their parts. The aim is to achieve an understanding of phenomena “from the outside.” Subjective phenomena do not play a role in the physical sciences.

But subjective phenomenology does play a major role in the sciences of the mind. These sciences are concerned, certainly, with objective understanding. But they also make use of a different sort of subjective understanding — “from the inside.” There are aspects of reasons, purposes, feelings, thoughts, and experiences that can only be understood from within, via sympathy or empathy or other translation into one’s own experience(5). Broadly understood, subjective phenomena (intentions, desires, and so forth) play a pivotal role in sociology, economics, political theory, anthropology, history, and psychology. As well, of course, in our ordinary “folk” attributions of motivations and beliefs to other people.

The weak (epistemological) version of the explanatory gap labels the obvious fact that we do not yet have an acceptable physicalist (objective, third-party, from the outside) theory of how physical properties, materialist causal laws, and physical events can give rise to a first-person subjective experience of conscious phenomena. Think of a modern computer, by comparison. No one has a problem with agreeing that a computer’s behavior can be fully explained by its circuitry, and physical causal laws. Any attribution to the computer of reasons, purposes, feelings, thoughts, or experiences is readily and universally understood as anthropomorphic metaphor. There is no explanatory gap here. Not so when we do the same kind of attribution to other people, or even to ourselves.

Understood epistemologically, the explanatory gap is readily accepted by physicalists, and merely recognizes a gap in our understanding of how the material causes the subjective, or how the subjective supervenes on the physical. Even if we come to (or start from) the metaphysical conclusion that qualia and other subjective phenomena are physical, there still remains an epistemological explanatory gap. In other words, an epistemological “explanatory gap” does not cast any doubt on the truth of physicalism.

The strong version of the explanatory gap, on the other hand, is a metaphysical concept. Proponents of the strong understanding claim that the mind is substantially and qualitatively different from the brain and that the existence of something metaphysically extra-physical is required. It is thought by many mind-body dualists (e.g. Rene Descartes(6), David Chalmers(4), Colin McGinn(7)) that subjective conscious experience constitutes a separate effect that is outside the materialist world.

The current inability of materialists to supply a suitably intelligible explanation of subjective conscious experience in the same manner as the explanation we could offer for computer phenomena is considered by many anti-materialists to be evidence of the existence of a metaphysical explanatory gap. For example, Nagel(2) and Jackson(8) take the gap as evidence that objective physical explanations cannot account for the intrinsic quality of experience. Although Jackson(9) changes his mind and later joins the ranks of materialist philosophers who deny that there is such a metaphysical gap. Searle(10) argues that the gap between any imaginable functionalist account and the intrinsic intentionality of thoughts is evidence against a functionalist explanation of that intrinsic intentionality. Chalmers(11) takes the existence of the gap to be evidence against materialism in favor of some sort of dualism. And McGinn(7) sits on the fence, as it were. He suggests that the gap is due to inherent limitations on the powers of human understanding – we cannot comprehend whatever might be necessary to bridge the gap. Clearly not a materialist position, yet not obviously a dualist one either.

There is no general consensus regarding whether the metaphysical gap exists, or what metaphysical conclusions the existence of an epistemological gap supports. Those wishing to use its existence to support dualism (or at least to deny physicalism) take the position that an epistemic gap – particularly if it is a definite limit on our cognitive abilities – necessarily entails a metaphysical gap. Others, such as Joseph Levine(9), argue that no such metaphysical conclusion should be drawn.

Some anti-physicalists have appealed to conceivability arguments for support. The most famous are those of Jackson’s Mary, Chalmers’ Zombies, and the Inverted Spectrum argument. Such “knowledge arguments”, as they are called, do not obviously avoid begging the anti-materialist question, and they rely upon claims and intuitions that are controversial and not completely independent of one’s basic view about physicalism.

If one could establish on a priori grounds that there is no way in which subjective phenomena could be intelligibly explained in materialist terms, it would be acceptable to conclude that the epistemological gap is also a metaphysical gap. However, the very strength of such a claim makes it difficult to assume without begging the metaphysical conclusion in question. Thus those who wish to use the epistemological gap to refute metaphysical physicalism must find independent grounds to support it. In the end, we are right back where we started. The epistemological explanatory gap argument doesn’t demonstrate a metaphysical gap in nature — the fact that we don’t know how to explain the subjective phenomenology in physical terms does not entail that a physical naturalistic explanation is not possible. Of course a plausible explanation for there being a gap in our understanding of nature is that there is a genuine gap in nature. But so long as we have countervailing reasons for doubting the latter, we have to look elsewhere for an explanation of the former.

Mental Causes

To establish the context within which this issue of Mental Causes becomes important, I must say a few words introducing the approaches to the “Mind-Body Problem” that appear in the literature of the Philosophy of Mind. These approaches fall into three groups:

(i) there are “Dualist” approaches that argue that the Mind (or mental phenomena, properties, events, causes) is in some separate world from the Body (or physical phenomena, properties, events, causes). The most famous advocate of Dualism was Rene Descartes(12). No one these days argues for his “substance dualism”. But the modern version is “property dualism” — the theory that mental properties are completely separate from, non-reducible to, and not understandable in terms of, whatever physical properties underlie them. One modern advocate of this theory is Colin McGinn(13).

(ii) there are “Idealist Monist” approaches that argue that all that exists is Mind, and that what we conceive of as the Body (or physical phenomena, properties, events) is but a figment, or echo, or construct of some Mind. The most famous advocate of this approach was George Berkeley(14). A modern version of this theory is the Pantheism argued for by the theologian Huw Owen(15).

(iii) and then there are the “Materialist/Physicalist Monist” approaches that argue that all that exists is the Body (or physical phenomena, properties, events, causes) and that what we conceive of as the Mind (or mental phenomena, properties, events, causes) is but a different view or description or conception of what is at base physical phenomena. Exact details of that relationship vary by philosopher. But despite that variation, Physicalism is the currently standard approach in the Philosophy of Mind(16).

One of the problems in interpreting the arguments offered by the different groups is the debate over just what constitutes “physical”. Although the intuitive notion is that what constitutes the “physical” is only those things that are studied by the material sciences, as outlined by McDowell(17) and Stoljar(18), there is a great deal of debate over the details of just how to specify that notion. For the purposes of this essay, however, I’ll leave the notion at the intuitive level.

Another area of dispute is the exact meaning of the concept of a “cause”. As Terence Horgan explains, contra Davidson’s conception of causation,

“causal explanation is a highly context relative, highly interest relative affair, and attention to numerous examples . . . makes clear the implausibility of insisting that the only properties of a cause and effect that are relevant to causal explanation are ones which figure in a ‘strict law’.”(19)

As with the detailed specification of “physical”, there is significant debate over the concept of a “cause”. Ideas range from the “association of ideas” of David Hume(20) to the transfer of energy and conservation of key quantities of Messrs. Fair, Salmon, and Dowe(21). The disputes center on just how a “mental cause” ought to be conceived, and how it achieves its effects (if any). Causes supposedly have their effects in virtue of their properties. So are the mental properties of mental causes responsible for their physical effects? And if so, how? Again, for the purposes of this essay, I’ll leave the notion of a “mental cause” at an intuitive level.

A key premise adopted within most of the various physicalist approaches is the assumption of “Causal Closure of the Physical”. This premise states: “No physical event has a cause outside the physical domain.” – Jaegwon Kim(22). Now since physicalism assumes that mental phenomena are (in some specified way) physical phenomena, and causal closure stipulates that all physical phenomena have only physical causes, if mental causes are not physical, as the dualists argue, then they would have no physical effects — in other words, they would be epiphenomenal.
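The argument can be compressed into a three-line schema. This is my own paraphrase, with Phys and Causes as informal predicates, not notation drawn from Kim:

```latex
\begin{align*}
&\text{(Closure)} && \forall e\,\forall c\,\bigl(\mathrm{Phys}(e) \wedge \mathrm{Causes}(c,e) \rightarrow \mathrm{Phys}(c)\bigr)\\
&\text{(Dualism)} && \forall m\,\bigl(\mathrm{Mental}(m) \rightarrow \neg\mathrm{Phys}(m)\bigr)\\
&\text{(Epiphenomenalism)} && \forall m\,\forall e\,\bigl(\mathrm{Mental}(m) \wedge \mathrm{Phys}(e) \rightarrow \neg\mathrm{Causes}(m,e)\bigr)
\end{align*}
```

Given Closure, any cause of a physical event is itself physical; given Dualism, no mental cause is physical; so no mental cause causes any physical event.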

Unfortunately, the argument suffers from a few serious defects. The first is that it relies on the premise of causal closure. Assuming causal closure entails that mental causes cannot have physical effects. Hence the argument commits the fallacy of begging the question. Obviously, if mental causes are not physical, they would be “non-physical” (i.e. “mental”). And, ex hypothesi, “mental” causes cannot cause physical effects.

Secondly, the argument is self-contradictory. If “mental” causes are admitted to truly cause physical phenomena, then they would not be epiphenomenal, but physicalism would be false. The only way to maintain the sense of the argument made by dualists is to conceive of “mental causes” as something separate from, over and above, or otherwise beyond, the physical. But that is either applying a dualist intuition to a physicalist base, contradicting the very notion of physicalism, or it is denying physicalism, in which case it does not follow that “mental” causes would be epiphenomenal.

A somewhat less dualist suggestion is a view of mental phenomena (and mental causes) as supervenient in some way on the physical. (There are several different versions of the notion of this “supervenient” relation that have been defined in the philosophical literature(23).) This allows the ontology to remain thoroughly physicalist, but provides for non-physical mental phenomena. Conceiving mental phenomena as different from, but supervening on physical phenomena, allows an opening for mental causes that do not cause any physical effects — hence epiphenomenal.

A number of philosophers have approached the problem from this perspective. For example, Tim Crane and Bill Brewer(24), Nick Zangwill(25), Douglas Ehring(26), and John Gibbons(27) have outlined various interpretations of a supervening relationship between mental phenomena (mental causation) and the underlying physical phenomena (physical causation) that makes it questionable whether “mental causes” should be considered efficacious or epiphenomenal. Jaegwon Kim(28) has described a concept of “supervenient causation” that supposedly allows a mental cause as a supervening entity, to cause both other mental phenomena as well as physical phenomena, by way of a special notion of “supervenient causation”. A notion he strictly separates from the “regular” notion of physical causation. But all these philosophers seem to view mental phenomena as different from physical phenomena (either as objects/entities, properties, or events), so they have trouble explaining just how mental causes can be effective — and not be epiphenomenal.

Other philosophers, adherents of the physicalist monist group such as Daniel Dennett(29), Stephen Yablo(30), and Crawford Elder(31) view the underlying “supervenience” relationship between the mental and the physical as being some sort of realizing or constitutive relationship — even though many of them still use the term “supervenient” to describe it. On this view of mental causes, they simply are physical causes — albeit differently described or conceived. Since, on this view, mental causes are physical in an unproblematic sense, there is no need to consider them as epiphenomenal. And no need for Kim’s “supervenient causation”.

There is no debate that, say, Quantum Mechanics and Relativity Theory supervene (in some fashion) on String Theory. And with the exception of scientific instrumentalists(32) there is no difficulty in granting full effective physical causality to the photon that mediates the repulsion of two converging electrons. Even though, in String Theory, all of this must be expressed in the equations of vibrating energy branes. There is, similarly, no debate that the electric charges of the electrons and protons of the constituting atoms are fully effective in physically causing the complex folding of a protein molecule. Likewise, there is no debate that the interplay of complex chemical molecules constitutes the dynamic process that we call life (fans of elan vital(33) aside). That the heart attack caused the subject’s death is an unproblematic use of physical causation. All potential debate is adequately defused because we can imagine, and thus understand, the processes (or kinds of processes) involved — to some degree, if not fully.

The debate only arises when we try to understand how the complex dynamic processes of life can give rise to the first-person subjective self-referencing “feels-like” experience of consciousness. We have more than adequate grounds, from the rest of science, to accept a Physicalist position for every other area of experience except these first-person, “feels-like” phenomena. The only reason not to extend that Physicalist position to include mental phenomena (and mental causes) would be a lack of imagination about the possible processes involved.

Does Intentionality Admit of a Physicalistic Explanation?

The term “intentionality” refers to the phenomenon that mental states/attitudes/concepts are about, are directed on, or represent other things. There is something that I affirm, deny, interpret, understand, believe, hope, wish, like, love, hate, (and according to some, perceive or sense), and so forth. “Intentionality” is this rather vaguely characterized notion of “aboutness”. The modern use of the term was initiated by Brentano towards the end of the 19th Century(34). He claimed that every mental state has intentionality and is directed towards an “intentional object”. In modern philosophy of mind, it is hotly debated whether this notion of “aboutness” is a necessary and/or sufficient condition for things mental. Some claim that only things mental can have this “aboutness.” Others claim that non-mental (i.e. physical) things can have the same sort of “aboutness”. The division between views matches the division between physicalism and anti-physicalism.

There are, of course, many purely physical things that do have an “aboutness”. Maps, photographs, sentences of a language, and so forth, are all about something. But the anti-physicalist draws a distinction between “intrinsic” versus “derivative” intentionality. Intrinsic intentionality is intentionality that is not derivative. An object has derivative intentionality when its “aboutness” relies on the interpretation of something outside that object. So a sentence of a language, for example, is about something only because something outside of it (a mind) provides that aboutness. The anti-physicalist argument is that only mental states have intrinsic intentionality. Philosophers arguing in this direction include Jerry Fodor, John Searle, Fred Dretske, Tyler Burge, Saul Kripke, among others.

In contrast to this view, consider the Mars rover Spirit. It has a panel of photocells fixed to its back. To angle its panel of cells to maximize power acquisition, the drivers sitting at JPL in Pasadena had to drive the rover up the side of a hill. Suppose JPL builds a Spirit2 rover that has two advances. One is a panel of solar cells that the rover can tilt and swivel on its own — it has a program module specifically programmed to do so when provided with a position to point to. And the second is a skycam (with its associated analysis software module) that tells that solar panel program module where the sun (or at least the brightest spot in the camera view) is located. Now Spirit2 can maximize its power acquisition without the involvement of the drivers at JPL. The message from the skycam analysis program module to the solar panel manager program module is clearly “about” the power maximization coordinates. The intentionality of this “aboutness” does not rely on the interpretation of something outside of it. The drivers at JPL need have no inkling about the messages passed between the two program modules. If this counts as “intrinsic” intentionality, then the physicalist case is made.
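The message-passing in this thought experiment can be sketched in a few lines of Python. Everything here is hypothetical — the module names, the message format, and the toy sky data are my own inventions for illustration, not anything JPL actually runs:

```python
# Hypothetical sketch of the Spirit2 thought experiment: one module
# produces a message "about" the brightest point in the sky, another
# consumes it, and no human interprets the message in between.

def skycam_brightest_spot(image):
    """Analysis module: return the (azimuth, elevation) of the brightest
    reading. `image` maps (azimuth, elevation) pairs to brightness values."""
    return max(image, key=image.get)

class SolarPanelManager:
    """Actuator module: points the panel at whatever coordinates it is sent."""
    def __init__(self):
        self.azimuth = 0.0
        self.elevation = 0.0

    def point_at(self, target):
        self.azimuth, self.elevation = target

# A toy sky: brightness readings at a few (azimuth, elevation) positions.
sky = {(10.0, 20.0): 0.3, (135.0, 42.0): 0.9, (270.0, 15.0): 0.1}

panel = SolarPanelManager()
message = skycam_brightest_spot(sky)   # the message "about" the sun's position
panel.point_at(message)
print(panel.azimuth, panel.elevation)  # 135.0 42.0
```

The point the sketch makes is structural: the content of `message` is fixed by the causal link between camera input and panel actuation, not by any outside interpreter.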

But the general reply of the anti-physicalist is that the entire Spirit2 rover is a device designed by a mind, and hence derives any intentionality of its parts from the intrinsic intentionality of the designing mind. But this response runs into a counter-argument offered by the likes of Ruth Millikan and Daniel Dennett, among others. We human beings are but a device designed by “Mother Nature” — otherwise known as the Blind Watchmaker(35), or the processes of evolution. According to Richard Dawkins, we (including our minds) are the result of the purely physical processes of evolutionary selection, “designed” for the purposes of ensuring the survival and proliferation of our genes. And no one suggests that “Mother Nature” (in the guise of evolution or our genes) has a mind, or intrinsic intentionality. If we are the results of a purely physical process of evolution, and it is admitted that we have intentionality (intrinsic or derivative), then the physicalist case is made.

If this case is accepted (and a Designing Intelligence is not smuggled in), then where is the principled difference between us with our “intrinsic” intentionality, and the Spirit2 rover and its “derived” intentionality? If we (our minds), are an artifact designed by the physical processes of evolution, and do have intentionality, then intentionality obviously admits of a physicalistic explanation. If intentionality does not admit of a physicalistic explanation, then we (at least our minds) are not a result of the physical processes of evolution. The anti-physicalist must provide something extra. Either way, as I said earlier, the answer one provides will depend on one’s prior commitment to or opposition to physicalism.

Token Identity versus Type Identity

If we adopt, if only temporarily for sake of argument, a physicalist position, there is a physical or naturalistic explanation that can be provided for the subjective phenomenology and intentionality of the mental. So the question now arises as to just what kind of explanation that might be.

To reiterate, in contemporary philosophy of mind, “physicalism” is the doctrine that all that exists, is just the physical — the purview of the physical sciences(36). Minds and conscious experience are based on, and can be explained by, physical properties. Physicalism maintains that mental phenomena — the “how it feels”, the qualia, propositional attitudes, and all the conscious experiences — are the products of physical properties of physical existents.

The type-identity theory of mind-brain physicalism (also known as reductive materialism) asserts that mental phenomena (properties, events, processes, states, etc.) can be grouped into types (or kinds) and that these can be identified with types (or kinds) of physical phenomena in the brain.(37) Type-identity theories maintain that any particular kind of mental phenomena is a particular kind of physical phenomena, where the “is” is to be understood as strict logical identity. More formally:

For every mental event/property/state M, there exists a physical event/property/state P such that for all events/properties/states X, X instantiates M if and only if X instantiates P.

The classic example in the literature is the supposed identity between the mental type of phenomena labelled “pain” with the physical type of brain phenomena labelled “c-fiber firing”. According to the type-identity theory, other mental phenomena will turn out, on investigation, to be nothing more than particular kinds of neuron firing patterns in the brain.

The type-identity theory was developed by U.T. Place, Herbert Feigl, J.J.C. Smart, and D.M. Armstrong in the 1950s and 1960s in response to problems perceived with the Behaviorist approach to understanding the mental(38). The classic development of the mind-brain type-identity theory was by D.M. Armstrong in his 1968 book A Materialist Theory of the Mind.

As noted by J.J.C. Smart, one of the incentives driving the development of the type-identity theories was an appeal to Occam’s Razor:

“There does seem to be, so far as science is concerned, nothing in the world but increasingly complex arrangements of physical constituents. . . . That everything be explicable in terms of physics . . . except the occurrence of sensations seems to be frankly unbelievable.”(39)

Another great advantage of a type-identity theory is that it solves Descartes’ problem (of mind-brain interaction) by reducing the mental to the physical. It provides a simple way to explain the causal efficacy of the mental. If the mental is the physical, then mental causes just are physical causes. Thus it eliminates the problem of reconciling mental causation with the notion of the causal closure of the physical. And it also allows empirical investigation of mental phenomena through the investigation of the physical substrate.

However, the multiple realizability argument pulled the rug out from underneath type-identity theories. The concept of multiple realizability was introduced by Hilary Putnam(40) and Jerry Fodor(41) in the late 1960’s as an objection to the mind-body type-identity theories, and as a consequence of their development of the Functionalist alternative. According to the Multiple Realizability argument,

  • The type-identity theorist posits that for every kind of mental phenomenon there is a unique kind of physical phenomenon of the brain such that a thing (life-form or otherwise) can exhibit that mental phenomenon if and only if it realizes that physical phenomenon. 
  • It seems quite plausible to maintain as an empirical hypothesis, that physically possible things (life-forms or otherwise) can exhibit the same mental phenomena without having brains that realize the same unique physical phenomena.
  • Therefore, it is highly unlikely that the Mind-Brain Type Identity theorist is correct.

If “pains” are identical to “c-fiber firings” in human brains, then an octopus, for example, could not feel pain because it does not have c-fibers. And a robot, android, or a Martian could not exhibit true consciousness, or have real experiences. It also denies the possibility that “pain” (or some other mental phenomenon, like seeing “red” or believing that it is raining) might be implemented in me differently from the way it is implemented in you.

This conclusion does not seem plausible to most people. It is considered more plausible that mental phenomena can be multiply realized in different kinds of physical entities. Thus the conclusion is that a mind-body type-identity theory is too strong to explicate physicalism because it limits the existence of mental phenomena to brains just like ours. It denies the possibility that other sorts of physical organizations might experience mental phenomena.
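The multiple realizability intuition is often glossed in functionalist terms: one role, many realizers. A programmer’s analogy (mine, not Putnam’s or Fodor’s) is an abstract interface with several physically unrelated implementations:

```python
# Analogy only: "being in pain" modeled as a functional role (an interface),
# with physically dissimilar realizers. The class names are illustrative.
from abc import ABC, abstractmethod

class PainRealizer(ABC):
    """The functional role: whatever plays this role counts as a pain state."""
    @abstractmethod
    def realizes_pain(self) -> bool: ...

class CFiberFiring(PainRealizer):
    """The human-style realization of the role."""
    def realizes_pain(self) -> bool:
        return True

class OctopusNociceptorActivity(PainRealizer):
    """A physically different realization of the very same role."""
    def realizes_pain(self) -> bool:
        return True

# Type-identity would demand a single realizing class for the role;
# functionalism is satisfied by any class that plays it.
realizers = [CFiberFiring(), OctopusNociceptorActivity()]
print(all(r.realizes_pain() for r in realizers))  # True
```

On this analogy, type-identity theory insists the interface has exactly one permissible implementation, which is precisely what the multiple realizability argument denies.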

In contrast with type-identity physicalism, token-identity physicalism argues that mental phenomena are unlikely to have “steady” physical correlates. In other words, token-identity physicalism argues that while each particular mental phenomenon is identical with some particular physical phenomenon (maintaining the “is” of mind-brain identity), there is no prima facie necessity that any type of mental phenomena has the same type of physical correlate. Token-identity physicalism thus accommodates the multiple realizability intuition.

It is important to realize that token-identity physicalism is strictly logically weaker than type-identity physicalism.(42) Token is to type as member is to set. We can share the same type of haircut, but not the same token haircut. So if type-identity physicalism is true, then token-identity physicalism is also true. But one can consistently deny type-identity and affirm token-identity.
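The difference in strength can be made explicit with quantifiers. This is my own shorthand, writing Mx for “x instantiates mental type M” and Px for “x instantiates physical type P”:

```latex
% Type identity: one physical type P covers every instance of M.
\text{Type-identity:}\quad \forall M\, \exists P\, \forall x\, (Mx \leftrightarrow Px)

% Token identity: each instance has some physical type, possibly a
% different P for each x.
\text{Token-identity:}\quad \forall M\, \forall x\, \bigl(Mx \rightarrow \exists P\, Px\bigr)
```

Because the existential quantifier sits inside the scope of ∀x in the token formula, the realizing physical type may vary from token to token; pulling it outside, as the type formula does, forces a single realizer for all tokens, which is precisely what multiple realizability denies.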

Type-identity theories claim that the identity of particular mental and physical tokens (phenomena, properties, processes, events, states, etc.) depends upon the discovery of lawlike relations between the respective mental and physical types. Token identity claims thus depend upon type-identity in these theories. Empirical evidence for the required type-identity laws is held to be necessary for particular token identity claims. It is necessary that empirical evidence support the claim that “pains” are “c-fiber firing” to support the claim that “this particular pain” is identical to “that particular c-fiber firing”. But to accommodate the multiple realizability intuition, token-identity theories must specifically deny the kind of lawlike relations posited by the type-identity theories. The best known of these token-identity theories is the Anomalous Monism of Donald Davidson.(43) Davidson’s theory requires no empirical evidence (it is based on a priori arguments) and depends on there being no lawlike relations between mental and physical types.

Anomalous Monism starts by assuming that some mental phenomena have causal interactions with (cause or are caused by) physical phenomena — like the physical phenomenon of it raining causing a mental phenomenon of my belief that it is raining. Added to this first premise is a premise to the effect that all cause-effect relations are covered by strict laws (not just ceteris paribus generalities). Davidson then adds the key premise that defines Anomalous Monism — a premise that there are no strict laws governing cause and effect between mental types of phenomena. Therefore, every causally interacting mental phenomenon must be token-identical to some physical phenomenon. This insulates his Anomalous Monism from the problems outlined above for type-identity theories, and makes it a token-identity theory.

But token-identity theories are criticised as being far too weak a basis on which to found a physicalist view of the mind-body problem. Token-identity physicalism can be true even if there is nothing remotely resembling a systematic relationship between the mental and the physical, yet such a systematic relationship between mind and body is fundamental to any robust physicalist position. Here is Jaegwon Kim’s analysis of the problem:

“. . .token physicalism is a weak doctrine that doesn’t say much; essentially, it only says that mental and physical properties are instantiated by the same entities. Any event or occurrence with a mental property has some physical property or other. But the theory says nothing about the relationship between mental properties and physical properties . . . Token physicalism can be true even if there is nothing remotely resembling a systematic relationship between the mental and the physical. . . . As far as token physicalism goes, there could be another world just like [ours] in every physical detail except that mentality and consciousness are totally absent. Token physicalism, therefore, can be true even if mind-body supervenience fails: What mental features a given event has is entirely unconstrained by what biological/physical properties it has, as far as token physicalism goes, and there could be a molecule-for-molecule physical duplicate of you who is wholly lacking in consciousness, that is, a zombie. This means that the theory says nothing about how mental properties of an event might be physically based or explained. Token physicalism, then, is not much of a physicalism. In fact, if we accept mind-body supervenience as defining minimal physicalism, token physicalism falls outside of the scope of physicalism altogether.”(44)

This conclusion does not seem plausible to most people. Most physicalists find it more plausible that zombies are logically incoherent, and that the mental supervenes (in some fashion) on the physical. Davidson, for example, explicitly addresses this problem by including within his Anomalous Monism a premise that the mental does in fact supervene on the physical.

But then there is this famous quote from Tim Crane:

“[I]t seems nomologically possible that many very different token physical entities could all be in the same type of mental state. So the type-identity theory is far too strong to be empirically plausible. But the token identity theory, on the other hand, seems too weak to be satisfactory — for what explains why these mental tokens are identical with these physical tokens? A solution to the mind-body problem is supposed to give an illuminating answer to the question of the relation between the mental and the physical. But it is hard to see how the token identity theory can do this.”(45)

Thus the conclusion is that a mind-body token-identity theory without additional premises is too weak to explicate physicalism, because it provides no systematic explanatory relationship between the mental and the physical. Token-identity physicalism by itself cries out for a companion theory to fill these holes. That companion is Functionalism, to which we now turn.

Functionalism as a Theory of the Mental

The best way to characterize the Functionalist approach to the Mental is to contrast it with its historical predecessor, Behaviorism. One reason for taking this contrasting approach is that the proponents of anti-physicalism often treat the physicalist conception of the mental as if it were equivalent to Behaviorism.

Broadly speaking, Behaviorism is a psychological doctrine that demands behavioral evidence for any psychological theory(46). It is clearly a “scientific”, objective, third-person approach to things mental, and that is one of the primary reasons why the anti-physicalists tend to conflate physicalism with behaviorism.

In 1963, Sellars commented that someone qualifies as a behaviorist if they maintain “hypotheses about psychological events in terms of behavioral criteria”(47). Such a doctrine holds that there is no knowable difference between states of mind unless there is a demonstrable difference in the behavior associated with each state.

The historical basis of behaviorism is the philosophical movement of Logical Positivism(48), dominant in the early decades of the 20th Century. Logical positivism argued that the meaning of statements used in science should be understood in empirical terms — in terms of the observations that verify their truth(49). Behaviorism adapted this philosophy to claim that mental concepts must refer to empirically observable behavioral tendencies, and so can (and should) be translated into behavioral terms.

Over the years, however, behaviorism has split into three rough families of argument. “Methodological” behaviorism maintains that psychology is the science of behavior, not the science of mind, and thus does not even pretend to offer a “theory of the mental”. “Psychological” behaviorism (also known as “radical” behaviorism(50)) maintains that the causes and explanation of behavior are to be found in the external environment, and not in internal mental phenomena. Behavior thus can be understood without reference to mental events or to mental processes. As a “theory of the mental”, therefore, it treats the mind as a unit black box, knowable only via its externally observable behavior. Such mental concepts as are entertained are treated as “theoretical fictions”. (A “theoretical fiction” is a computational or imaginary device proposed by a theory with no pretense that the device is real. An instrumentalist in the philosophy of science, for example, will regard the notions of electrons and quarks as theoretical fictions.) “Analytical” behaviorism (also known as “philosophical” or “logical” behaviorism), on the other hand, maintains that all mental concepts can be translated into behavioral concepts, and that terms for mental concepts can and should be replaced by terms for behavioral concepts. As a “theory of the mental”, therefore, it also treats the mind as a unit black box, but does not permit mental concepts even as “theoretical fictions”. Analytic Behaviorism was the “orthodox” theory of the mental from about the 1930s to the dawn of computational cognitive science in the 1950s(51).

Analytic Behaviorism, as a theory of the mental, has no place for a representation of the environment as a determinant of behavior. To many critics of behaviorism, however, the subject’s internal representation of the environment and learning history seems to be more of a behavioral determinant than the bare reinforcement history itself. How a subject sees (classifies, interprets, represents) a stimulus appears more germane to the behavior elicited than just the bare stimulus history. Critics sometimes argue this point through the concept of “qualia”. Some experiences, it is argued, have characteristic “qualia” or presentationally immediate phenomenal qualities of experience. Being in pain, for example, does not just involve the appropriate behavioral dispositions; it also involves (so the argument goes) a particular sort of “what it is like”(52) to experience pain. Behaviorism must deny the reality of qualia, and the possibility of internal representations that can modify behavioral dispositions. A philosophical zombie(53) is a hypothetical being that is indistinguishable from a normal human being except that it lacks conscious experience, or qualia. Since analytic behaviorism admits no place for internal representations and denies the existence of qualia, on its account we are all philosophical zombies. This seems to be wrong.

If mental states are identical with behavioural dispositions, as behaviourism holds, then any two people with the same behavioural dispositions will be in the same mental states. There do seem, however, to be circumstances in which two different mental states are associated with identical behavioural dispositions, or the same mental state with two different behavioural dispositions. Behaviourism cannot account for this. It appears unable to distinguish between real mental states and pretend mental states, since both arguably involve precisely the same behavioural dispositions. If the mental state of being in pain is defined in terms of the behaviour we would normally associate with such a state, how do we characterize a person who exhibits that behaviour without being in pain? Behaviourism would seem to deny the possibility. An actor who displays all the behaviour associated with pain would have to be considered actually to be in pain; pretense is impossible. The reverse scenario involves Super-Spartans(54) – hypothetical beings with mental states identical to our own, but who lack the normal behavioural dispositions. Stab a Super-Spartan, and though he will feel pain, he will have no disposition to exhibit normal pain behaviour. Behaviourism must say that Super-Spartans are also impossible. This seems just as wrong.

Behaviorism views the mind as a unit black box, treating only the inputs (stimuli) and outputs (behaviors), and regarding mental states as operators translating inputs into outputs. Functionalism, on the other hand, builds a theory of the internal workings of the black box, positing functional roles (mental states) that mediate the translation of inputs (stimuli) to outputs (behavior)(55). Functionalism, as a philosophy of mind, is the doctrine that mental states are defined by the functional roles they play in the complex economy of causal interactions within the network of roles of which they are a part. Behaviorism takes the identity of a mental state to be determined by its relations to the organism’s inputs (stimuli) and outputs (behavior). Functionalism takes the identity of a mental state to be determined by the organism’s inputs (stimuli) and outputs (behavior), but also by its causal relations to other mental states.

Thus functionalism solves one of the problems that plagues behaviorism. For behaviorism, a belief that it is raining has to be defined in terms of behavioral dispositions to stay out of the rain or carry an umbrella, and so forth. But this approach runs into the difficulty that my disposition to stay out of the rain will not manifest itself if I also have the desire to go “singing in the rain”. Behaviorally defined mental states thus seem to require an infinite regress of other mental states. Critics argue that the holism that this regress requires can never be satisfied — that one cannot get away from mental states being a part of the determinants of behavior. Functionalism, however, embraces this holism, defining any one mental state by the role that it plays in the larger functional network.
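The point can be sketched in a few lines of code (the function and state names below are invented for illustration; this is a toy model, not any published formalism): behaviour is computed not from the stimulus alone, but from the stimulus together with the agent's other mental states.

```python
# Toy sketch of the functionalist point: the same stimulus, mediated by
# different co-occurring mental states, yields different behaviour.
# A bare stimulus-to-response mapping cannot capture this.

def behaviour(stimulus, desires):
    """Map a stimulus to behaviour, mediated by other mental states."""
    # The "belief that it is raining" is itself a functional role: the
    # state caused by the rain stimulus, which (together with desires)
    # causes some behaviour.
    believes_raining = (stimulus == "rain")
    if believes_raining and "sing in the rain" in desires:
        return "go singing in the rain"
    if believes_raining:
        return "carry an umbrella"
    return "walk normally"

print(behaviour("rain", set()))                  # carry an umbrella
print(behaviour("rain", {"sing in the rain"}))   # go singing in the rain
```

The belief that it is raining is defined here not by any single output, but by its place in a network that includes other states (the desires), which is just the holism described above.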

Functionalism, as a theory of the mind in contemporary philosophy, developed largely as an alternative to both the type-identity theories of mind and behaviourism.(56) Functionalism operates at a theoretical level between the physical implementation and the behavioral output.(57) It thus plays its role at Dennett’s “Design Level”.(58) Functional roles are defined independently of any physical structure realizing the role, so functionalism solves the multiple realizability problem plaguing the type-identity theories. Functional roles are also defined independently of the higher level of Dennett’s “Intentional Level”(58). Functionalism thus seeks to explain and understand intentional concepts by positing a sub-intentional network of roles that realize them.

Over the years functionalism, like behaviorism, has divided into a number of different varieties. “Machine-state Functionalism” was first proposed by Hilary Putnam in the 1960s.(59) It was inspired by the work of the early computer scientists — most notably Alan Turing.(60) On this view, any system that possesses a mental life is simply a complex Turing Machine instantiating a certain machine table. Each mental state (a thought, for example) is actually a machine state that arises in the course of running that program. “Psycho-functionalism” is most notably associated with Jerry Fodor.(61) It views psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences: psychological states have the same kind of functional and teleological roles as do biological organs. Psycho-functionalism can thus be characterized as “a posteriori” or “scientific” functionalism; whether it is true can be learned only as a result of empirical investigation into the workings of the mind. “Analytic Functionalism” is most closely associated with David Lewis.(62) It, like analytic behaviorism, is mostly concerned with the meaning of mentalistic terms. Mental terms are defined by the theories of the mental in which they occur, and not by any extrinsic features of the world. Such functional definitions, therefore, are claimed to be analytic or a priori, rather than empirical. The central idea of analytic functionalism is that the essential functional roles of the mental are to be discovered by analysis of our common-sense knowledge of mental phenomena – folk theories of the mental. Daniel Dennett(63) and William Lycan(64) are more recent advocates of this branch of functionalism.
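A machine table of the kind machine-state functionalism invokes can be sketched as a lookup from (state, input) pairs to (output, next state) pairs. The states and stimuli below are invented for illustration; this is a toy sketch in the spirit of Putnam's proposal, not his own example.

```python
# A minimal machine table: each "mental state" is just a machine state,
# individuated entirely by how it maps inputs to outputs and successor
# states. The entries are invented for illustration.

MACHINE_TABLE = {
    # (current state, input):  (output,       next state)
    ("content", "stab"):       ("say 'ouch'", "in pain"),
    ("content", "nothing"):    ("smile",      "content"),
    ("in pain", "aspirin"):    ("sigh",       "content"),
    ("in pain", "nothing"):    ("groan",      "in pain"),
}

def step(state, stimulus):
    """Run one step of the machine: look up output and successor state."""
    return MACHINE_TABLE[(state, stimulus)]

state = "content"
for stimulus in ["stab", "nothing", "aspirin"]:
    output, state = step(state, stimulus)
    print(stimulus, "->", output)
```

Nothing in the table says what the states are made of; any physical system instantiating the same table is, on this view, in the same mental states. That is the multiple-realizability point in miniature.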

The Analytic Functionalist view of “qualia” and the presentationally immediate phenomenal qualities of experience is that they have well-defined functional natures. Unlike behaviorism, functionalism does not have to deny the reality of qualia (although some functionalists do(65)). How a subject sees (classifies, interprets, represents) a stimulus simply is a particular functional role. The phenomenal character of, say, pain or seeing red or remembering where the tiger was last reported is one and the same as (is type-identical to) the functional role that plays an essential part in mediating between physical inputs (body damage, surface reflectance properties, verbal reports from others) and physical outputs (yelling “Ouch!”, picking the ripe apple, going left rather than right).(66) All the other mental states that might be involved (for a Super-Spartan, say, a desire not to let you know I am in pain) are just other functional roles connected in suitably causal ways in the holistic network that is the conscious mind.

There are two famous objections to functionalist theories of qualia: the Inverted Spectrum Argument and the Absent Qualia Hypothesis. The inverted spectrum argument is based on the apparent possibility of two people sharing their color vocabulary and discriminations, although the colors the one sees (his qualia) are systematically different from the colors the other sees.(67) Necessary to this argument is the stipulation that there is no discernible difference in the behavioral dispositions of the two subjects. Thus, the argument proceeds, the two are in different phenomenal states with the same behavioral dispositions; hence the qualia of sensory experience cannot be captured in functional roles. But functionalists can argue in response that this scenario is either not conceptually possible or not metaphysically possible.(68) It is open to the functionalist to reply that there would necessarily have to be some salient fine-grained functional differences between the two subjects, notwithstanding any admitted larger-scale functional identity between them. Seeing red, or green, just is a particular functional role. If the stipulation is that the two subjects differ in what they see, then necessarily they are exercising different functional roles.

The absent qualia hypothesis is the hypothesis that it is possible for there to be functional duplicates of conscious subjects that entirely lack qualia. Two examples of this hypothesis dominate the literature: the Zombie scenario(69) analyzed in detail by Chalmers(70), and the China Brain suggested by Block(71). The zombie scenario can be dismissed by the functionalist as both logically incoherent and metaphysically impossible. The China Brain scenario can, however, be accepted by the functionalist. Contrary to Block’s argument, and however strange it might seem, the China Brain does experience qualia and does have beliefs. That supposedly counter-intuitive response is explained by the disparity in scale — the individual Chinese citizen participating in the experiment need have no more awareness of the consciousness and qualia of experience of the China Brain than does any individual neuron in the human brain(72).

In other words, functionalism provides a far better explanation than behaviorism of the subjective experience of the mental. In the view of modern functionalism, our minds are an anarchic network of interacting functions. There is no Official Understander to give functional roles meaning, to make decisions, and to understand the results. Consciousness, and the feel of subjective experience, is the result of the interaction of all those differing functions. Although functionalism may not be the best possible theory (there are many criticisms of it in the literature that I have not touched on), it is clearly better than behaviorism as a theory of the mental.

Evolutionary Pragmatism

The philosophy of Evolutionary Pragmatism does not jump to the conclusion that there is something there beyond the physically visible. Adding something extra would require the assumption of something that is not part of the perceptible Reality. It would be adding something for which there is no clear evidence. Ignorance of the connection between the biology and the psychology is not an excuse to assume there is no connection. Evolutionary Pragmatism invokes Ockham’s Razor to reject the added complication. For Evolutionary Pragmatism, the “mind”, “Soul”, “id”, or whatever, is but a functional manifestation of the physical brain in operation. Evolutionary Pragmatism, therefore, adopts an overtly Physicalist-Monist attitude towards the nature of the Mind. Moreover, it adopts Analytical Functionalism as the best approach to closing the epistemological explanatory gap.

You cannot expect to comprehend the complex behaviour of a modern super-computer by examining the physics of the electron moving through a semi-conductor. You can thereby gain a good understanding of how the supercomputer works at the level of the electron, but you can never gain a decent comprehension of how the whole computer works as a single entity. Nor can you gain that insight by examining the programming of the computer at the language-syntax level. Understanding how the programmer writes programs, and how the computer interprets and executes those programs, would still not provide you with insight into how the entire box functions as a whole. To understand that, you have to “go up” an additional level or two, and view the computer as a processor that executes programs. At that level, you can begin to see how the various programs interact, and how that interaction determines the behaviour of the entire system. At this “higher” level you no longer require any specific knowledge of what the programs are actually doing, the syntax of the programming languages, or the physics of electrons in semi-conductors. This is the level addressed by analytic functionalism.

You can regard these functional levels as forming a simple hierarchy, extending from the black box of the entire computer at the “Top” to the flows of electrons through semiconductors at the “Bottom”. In any hierarchy, of any kind, you cannot gain a full understanding of the functioning of a particular node by studying the detailed functioning of nodes far down in the branches below it. You can only gain this understanding by studying the interaction between the node you are interested in and its peer nodes (other nodes at the same level), and by studying the interactions among its child nodes on the immediately lower level. None of the other nodes in the hierarchy will contribute much of significance to understanding the behaviour of the node you are interested in. (See notes on the Theory of Hierarchies.)

What the biologists are doing is studying the nodes at the very bottom of the hierarchy called a human brain. What the psychologists are doing is studying the nodes in the top couple of layers. And you, of course, when you are thinking about thinking, are studying only the top-most layer of the hierarchy. There are many layers in between the two domains of study. So of course (except in rare instances) you will not be able to understand the behaviour of the psychologists’ level by learning about the behaviour of the biologists’ nodes. When studying the nature and behaviour of the human mind as an operational product of the physical brain, we can safely ignore almost all of what the biologists have been telling us about that brain. We need only concentrate on what the psychologists have been telling us.

But this separation in fields of study does not imply, and especially does not prove, that the “Mind” is not a physical, perceptible, and testable part of reality. What we refer to as “Mind” is the operational result of a functioning brain, just as the sound of Beethoven’s Fifth Symphony is the operational result of a functioning orchestra. To understand the beauty and majesty of such music, you do not investigate the physics of a violin string; you study the interplay of sounds over time. Examining the physics of the violin string will give you some knowledge of how the orchestra produces sound, and some knowledge of how various sounds interact. But it will give you no information that is useful in understanding the majesty of a magnificent symphony in progress. The same is true of the human mind. Do not concern yourself with the biology of the nerve cell unless you are interested in questions about how nerve cells work, and cause other cells to do things. If you are interested in studying the “Mind”, however, study instead the majesty of human behaviour and human thought processes.

The brain is a physical thing that we can touch and feel and play with. We can take it apart and examine its parts. We can put it under various forms of microscope and watch it at work. You can do the same sort of thing when examining the physics of a violin string. But we cannot do the same sort of things to the sound of Beethoven’s Fifth Symphony, or the mind. The mind, being the product of a process in operation, is not something we can touch, or take apart, or experiment with its operations. The mind is the music that is generated when the orchestra of the brain is at work. When the brain does not function, there is no “Mind”. When the orchestra is not playing, there is no music.

The idea of the mind being separate from the body seems so natural. We speak of being “in” our bodies, as if we were tenants in some edifice. We talk of observing how our bodies are doing, of our bodies letting us down, as if “we” were separate but inseparable residents in a shell that is the body. And when we have a pain in the finger, the experience of the pain is in the finger, not in the brain. We can see that the finger is physically injured. A doctor can examine the damage. But the doctor cannot experience the pain. The pain is unique to us, and somehow separate from the finger that is injured, even though that is where the pain is located.

But if the mind and the body are separate, what is the mind? What is left in the body when there is no mind? What must be added to a body to create a person? Scientists have done a pretty good job at examining the body. We have a pretty good understanding of the biochemistry of the body, including the workings of the neurons of the brain. So let’s consider a body lying in a bed. It is breathing through a respirator, and being fed through a tube. But it is not conscious. Let’s consider the possibility that it has no mind. Let’s call this mind-less body a “Zombie”. Can this zombie do things in the absence of the mind? Its heart is beating. And a rap below the kneecap will produce a lower leg jerk. But what is missing? If you ask a scientist, he will poke and prod, and scan and examine, and he will find nothing missing. Everything that is normally there in a clearly non-zombie individual, is also there in our zombie. So what is missing?

And if we now add the mind to our zombie, but leave the person asleep, what have we added? We have added something that the scientist cannot detect, or measure, or examine. The physical matter that was there before is still there in exactly the same quantities. The scientist can detect nothing additional. And how does the mind affect the body? Suppose our sleeping friend awakes and sits up. A scientist can trace the nerves from the stomach muscles back into the deeper reaches of the brain. At what point does the mind step in and generate its influences? To the scientist, the brain is a complex weaving of neural impulses. Given the proper measuring equipment, the scientist would be able to trace the nerve impulses from neuron to neuron. At what point does the mind step in? Can the mind influence any of the neural circuits without the scientist detecting it? If it can, how does it do so? If it cannot, how is it that the scientist cannot detect the influence?

The simple and inescapable fact is that there is no evidence whatever for any separate existence of the mind. Feelings, both of the pain in the finger and the enjoyment of a pecan tart, may be private and non-communicable. But they are nevertheless states of matter. They may be highly complex circuits of neural impulses in process, but they are measurable, physical configurations of a physical Reality. So Evolutionary Pragmatism chooses not to complicate matters by assuming the existence of something for which there is no evidence. Mind and body are one, and measurable parts of reality. Mind is the consequence of the body (sub-part brain) in action.

Despite all our studies of the biochemistry of the individual brain cell, we gain little useful information on the behaviour of the larger organism of which it is a part. Indeed, when we wish to speak about the behaviour of multi-celled organs or organisms, we must employ different concepts, and a different language. We cannot hope to understand how the community of cells behaves by studying the behaviour of the individual cells. Although the rules that determine the community’s behaviour are inherent in the makeup of each individual cell, the number of possible interactions that must be considered is far too large to be managed at the cellular level of detail. Instead, we abstract away the incredible detail involved, and deal with the behaviour of larger accumulations of cells in the community. Thus we can gain an understanding of breathing behaviours by studying the behaviours of the cells in the respiratory system. And we can gain an understanding of the behaviour of eating by understanding the behaviour of the cells in the digestive tract. And so forth. But this still does not shed much light on the overall behaviours of the entire community. To gain that level of understanding, we must deal at the level in the hierarchy of the individual organism, and understand the behaviour of that level of the hierarchy.

An excellent example of the consequences of this kind of hierarchical approach to behavioural analysis comes from the relatively new field of computational life simulation (A-Life). Researchers in this field program computers to simulate certain aspects of life. Their approach is to define simple computational “organisms” that have very simple rules of behaviour and are allowed to change their rules of behaviour in well understood ways, over a number of “generations”. When these simulations run, they mimic the processes of evolution, in that certain “fitness” rules determine which organisms of any generation are permitted to “breed” slightly altered off-spring. The fascinating outcome of these simulations is that the organisms begin to display unexpected behaviours. The most famous of these computational worlds (although, strictly speaking, it is a cellular automaton with fixed rules rather than an evolutionary simulation) is the Game of Life, invented by mathematician John Conway(73). This simulation takes place on an infinite plane of grid squares. The rules for each square are deceptively simple; the resulting behaviour of the entire plane is fascinatingly complex. The Game of Life has hooked many a student of computational simulations, and is so simple at the level of the individual square or “cell” that it has been programmed on micro-computers for many years. There is even a newsletter published for the many researchers and everyday folk who are interested in the emergent behaviour that these simple rules generate.
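Conway's rules can be stated in a dozen lines of code. The sketch below is a minimal, illustrative implementation (representing the grid as a set of live-cell coordinates is a common convenience, not part of Conway's definition):

```python
# A minimal implementation of Conway's Game of Life, sufficient to watch
# simple local rules produce structured global behaviour. The grid is a
# set of live-cell coordinates; every other cell is dead.

from itertools import product

def neighbours(cell):
    """The eight cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def evolve(live):
    """One generation: a dead cell with exactly 3 live neighbours is born;
    a live cell with 2 or 3 live neighbours survives; all others die."""
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

# A "blinker": three cells in a row flip between horizontal and vertical
# with period two -- a stable structure the cell-level rules never mention.
blinker = {(0, 1), (1, 1), (2, 1)}
print(evolve(blinker) == {(1, 0), (1, 1), (1, 2)})   # True: now vertical
print(evolve(evolve(blinker)) == blinker)            # True: back again
```

Nothing in `evolve` mentions blinkers, gliders, or any of the other stable structures the game produces; they are emergent, in exactly the sense discussed below.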

Starting with well understood and simple rules of behaviour, and well understood and simple rules for “learning”, the outcome is unpredictable by the experimenter. Behaviour that emerges in this way from the interplay of simple rules is called “emergent” behaviour. Numerous examples of such behaviour are also being discovered in the study of chaos and dynamical systems. It appears that the interplay of a large enough number of individually simple structures very quickly becomes so complex that the consequences are unpredictable. The behaviour of the myriad of individual nerve cells in the brain that results in the mind is but one example of such “emergent” behaviour. You start with the large number of individually relatively simple nerve cells that is the brain, and you end up with the unexpected and unpredictable behaviour that is the mind.

I cannot do proper justice here to the full explanation of Mind as emergent behaviour of the brain. A very much superior exposition of this approach to Mind is presented by Daniel C. Dennett in his treatise “Consciousness Explained”(74). I highly recommend this very readable text to anyone who wishes to explore this subject further, or has any reservations about this materialist view of the Human Mind.

And thus it is that the complex concept that is the Human Mind, with all of its confusing and incredible diversity of behaviour, can be regarded as the natural product of the bio-chemistry of the nerve cells that are its parts. There is no mystery here, merely a lack of comprehension, and an ignorance of details. What the psychologists tell us about the majesty and uniqueness of human thought can be easily melded with the physical chemistry detailed by the bio-chemist. What lies between these two extremes of the hierarchy is an as yet unknown series of levels of complexity and of function. The unpredictability of human behaviour emerges naturally from the simplicities of chemistry – just as the majesty of Beethoven’s Fifth Symphony emerges naturally from the simplicities of successive density waves of air. We are gaining more and more evidence on a daily basis that we are different not in kind, merely in degree. All that remains is for us to discover the rules, to leverage the non-linearities, and to glory in the possibilities.

 

Notes & References

(1) Unless otherwise specified, all dictionary definitions are quoted from The American Heritage® Dictionary of the English Language, Third Edition copyright 1992 by Houghton Mifflin Company.

(2) Levine, Joseph; “Materialism and Qualia: the Explanatory Gap” in Pacific Philosophical Quarterly, Vol 64 (1983). Pp 354-361.

(3) Nagel, Thomas; “What Is It Like to be a Bat?” in Philosophical Review, Vol 83 (1974). Pp 435-450.

(4) Chalmers, David; “Facing Up to the Problem of Consciousness” in Journal of Consciousness Studies, Vol 2, No 3 (1995). Pp 200-219.

(5) Harman, Gilbert; “Explaining an Explanatory Gap,” in American Philosophical Association Newsletter on Philosophy and Computers, Vol 6, No 2 (2007). Pp 2-3

(6) Descartes, Rene; Meditations on First Philosophy (3rd Ed). Translated by Donald A. Cress, Hackett Publishing Company, Indianapolis, Indiana. 1993. ISBN 0-87220-192-9.

(7) McGinn, Colin; “Can we solve the mind-body problem?” in Mind, Vol 98 (1989). Pp 349-66

         Problems in Philosophy: The Limits of Inquiry. Wiley-Blackwell, Oxford, England, 1993. ISBN 978-1-5578-6475-8.

         “How Not To Solve the Mind-Body Problem” in Physicalism and Its Discontents, Gillett, Loewer (eds). Pp 284-306.

(8) Jackson, Frank; “Epiphenomenal Qualia” in Philosophical Quarterly, Vol 32 (1982). Pp 127-136.

“What Mary Didn’t Know” in Journal of Philosophy, Vol 83 (1986). Pp 291-295.

“Postscript” in Contemporary Materialism, P. Moser & J. Trout (eds). Routledge, London, England, 1995. Pp 184-189.

“Postscript on Qualia” in his Mind, Method, and Conditionals. Routledge, London, England, 1998. Pp 76-79.

(9) Jackson, Frank; “Mind and Illusion” in There’s Something About Mary: Essays on Phenomenal Consciousness and Frank Jackson’s Knowledge Argument, Ludlow, Nagasawa, Stoljar (eds). Pp 421-442.

(10) Searle, John; Minds, Brains, and Science. Harvard University Press, Cambridge, Mass, 1986. ISBN 978-0-674-57633-9.

(11) Chalmers, David; The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, Oxford, England, 1997. ISBN 978-0-195-11789-9.

(12) Descartes, Rene; excerpt from Meditations on First Philosophy (Bk II & VI) in Chalmers, David J. (ed.); Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, Oxford, England, 2002, ISBN 978-0-19-514581-6. Pp 10-20.

(13) McGinn, Colin; The Character of Mind: An Introduction to the Philosophy of Mind, 2nd Edition, Oxford University Press, Oxford, England, 1982, ISBN 978-0-19-875208-0.

(14) Berkeley, George; excerpt from The Principles of Human Knowledge in Beakley, Brian & Ludlow, Peter (ed.); The Philosophy of Mind: Classical Problems / Contemporary Issues, 2nd Edition, The MIT Press, Cambridge, Massachusetts, 2006, ISBN 0-262-52451-1. Pp 31-34.

(15) Owen, Huw Parri; Concepts of Deity, Herder and Herder, Crossroads Publishing Company, New York, New York, 1971, ISBN 978-0-3330-1342-7.

(16) Wikipedia contributors; “Physicalism” in Wikipedia, The Free Encyclopedia. URL=<http://en.wikipedia.org/w/index.php?title=Physicalism&oldid=511083913>.

(17) Dowell, J.L.; “Formulating the Thesis of Physicalism: An Introduction” in Philosophical Studies: An International Journal of Philosophy in the Analytic Tradition, Vol 131, No 1 (Oct 2006), pp 1-23, URL=<http://www.jstor.org/stable/25471797>

(18) Stoljar, Daniel; “Physicalism” in The Stanford Encyclopedia of Philosophy (Fall 2009 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/fall2009/entries/physicalism/

(19) Horgan, Terence; “Mental Quausation” in Philosophical Perspectives, Vol 3, Philosophy of Mind and Action Theory (1989), pp 47-76. URL=http://www.jstor.org/stable/2214263

(20) Hume, David, A Treatise of Human Nature 2nd Edition, L. A. Selby-Bigge (ed.) revised by P.H. Nidditch, Clarendon Press, Oxford, England, 1975. Pg 108

 “For finding, with this system of perceptions there is another connected by custom, or, if you will, by the relation of cause and effect, it proceeds to the consideration of their ideas; and as it feels that ’tis in a manner necessarily determin’d to view these particular ideas, and that the custom or relation, by which it is determin’d, admits not of the least change, it forms them into a new system, which it likewise dignifies with the title of realities. The first of these systems is the object of the memory and senses; the second of the judgment.”

(21) Dowe, Phil, “Causal Processes” in The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/fall2008/entries/causation-process/

(22) Kim, Jaegwon; Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation, A Bradford Book, The MIT Press, Cambridge, Massachusetts, 2000, ISBN 978-0-2626-1153-4.

(23) McLaughlin, Brian & Bennett, Karen, “Supervenience” in The Stanford Encyclopedia of Philosophy (Winter 2011 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/win2011/entries/supervenience/

(24) Crane, Tim & Brewer, Bill; “Mental Causation” in Proceedings of the Aristotelian Society, Supplementary Volumes, Vol 69 (1995), pp 211-253, URL=http://www.jstor.org/stable/4107076

(25) Zangwill, Nick; “Good Old Supervenience: Mental Causation on the Cheap” in Synthese, Vol 106, No 1 (Jan 1996), pp 67-101, URL=http://www.jstor.org/stable/20117478

(26) Ehring, Douglas; “Part-Whole Physicalism and Mental Causation” in Synthese, Vol 136, No 3 (Sep 2003), pp 359-388, URL=http://www.jstor.org/stable/20118340

(27) Gibbons, John; “Mental Causation without Downward Causation” in The Philosophical Review, Vol 115, No 1 (Jan 2006), pp 79-103, URL=http://www.jstor.org/stable/20446882

(28) Kim, Jaegwon; “Supervenience and Supervenient Causation” in The Southern Journal of Philosophy, Vol 22, No S1 (Spring 1984).

(29) Dennett, Daniel C.; Consciousness Explained, Little Brown and Company, Boston, Massachusetts, 1991, ISBN 0-316-18065-3.

(30) Yablo, Stephen; “Mental Causation” in The Philosophical Review, Vol 101, No 2 (Apr 1992), pp 245-280, URL=http://www.jstor.org/stable/2185535

(31) Elder, Crawford L.; “Mental Causation versus Physical Causation: No Contest” in Philosophy and Phenomenological Research, Vol 62, No 1 (Jan 2001), pp 111-127, URL=http://www.jstor.org/stable/2653591

(32) Wikipedia contributors; “Instrumentalism” on Wikipedia, The Free Encyclopedia, URL=http://en.wikipedia.org/w/index.php?title=Instrumentalism&oldid=565791915

(33) Wikipedia contributors; “Elan vital” in Wikipedia, The Free Encyclopedia, URL=http://en.wikipedia.org/w/index.php?title=%C3%89lan_vital&oldid=562956048

(34) Huemer, Wolfgang; “Franz Brentano” in The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/fall2013/entries/brentano/

(35) Dawkins, Richard; The Blind Watchmaker. W.W. Norton & Co.; New York, New York; 1987. ISBN 0-393-02216-1.

(36) Stoljar, Daniel; “Physicalism” in The Stanford Encyclopedia of Philosophy (Fall 2009 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/fall2009/entries/physicalism/

(37) Wikipedia contributors; “Type physicalism” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=Type_physicalism&oldid=527514709

(38) Schneider, Steven; “Identity Theory” in the Internet Encyclopedia of Philosophy, URL=http://www.iep.utm.edu/identity/

(39) Smart, J.J.C.; “Sensations and Brain Processes” in The Philosophical Review, Vol 68, No 2 (Apr 1959), pp 141-156. URL=http://www.jstor.org/stable/2182164

(40) Putnam, Hilary; Mind, Language and Reality: Philosophical Papers, Volume 2, Cambridge University Press, New York, New York, 1975.

(41) Fodor, Jerry; “Explanations in Psychology” in Philosophy in America, Max Black (ed.), Cornell University Press, Ithaca, New York, 1965.

Psychological Explanation: An Introduction to the Philosophy of Psychology, Random House, New York, New York, 1974.

(42) Wetzel, Linda; “Types and Tokens” in The Stanford Encyclopedia of Philosophy (Spring 2011 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/spr2011/entries/types-tokens/

(43) Yalowitz, Steven; “Anomalous Monism” in The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/win2012/entries/anomalous-monism/

 Davidson, Donald; “Mental Events” in Essays on Actions & Events. Clarendon Press, Oxford, England, 1980.

(44) Kim, Jaegwon; Philosophy of Mind, Third Edition, Westview Press, The Perseus Books Group, Boulder, Colorado, 2010. ISBN 978-0-813-34458-4.

(45) Crane, Tim; “The Mental Causation Debate” part of “Mental Causation” by Tim Crane and Bill Brewer in Proceedings of the Aristotelian Society, Supplementary Volumes, Vol. 69, (1995), pp. 211-253

(46) Graham, George; “Behaviorism” in The Stanford Encyclopedia of Philosophy (Fall 2010 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/fall2010/entries/behaviorism/

(47) Sellars, Wilfrid; “Philosophy and the Scientific Image of Man” in Science, Perception, and Reality, Routledge & Kegan Paul, New York, New York, 1963, pg 22.

(48) Smith, L.; Behaviorism and Logical Positivism: A Reassessment of Their Alliance, Stanford University Press, Stanford, California, 1986.

(49) Wikipedia contributors; “Logical Positivism” in Wikipedia, The Free Encyclopedia, URL=http://en.wikipedia.org/w/index.php?title=Logical_positivism&oldid=526844476

(50) Rey, George; Contemporary Philosophy of Mind: A Contentiously Classical Approach, Wiley-Blackwell, Oxford, England. 1997.

(51) Bechtel, W. & Graham, G. (eds.); A Companion to Cognitive Science, Wiley-Blackwell, Oxford, England, 1998.

(52) Nagel, Thomas; “What Is It Like to Be a Bat?” in The Philosophical Review, Vol 83, No 4 (Oct 1974), pp 435-450. URL=http://www.jstor.org/stable/2183914

(53) Wikipedia contributors; “Philosophical Zombie” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=Philosophical_zombie&oldid=525886965

(54) Putnam, Hilary; Mind, Language, and Reality. Cambridge University Press, Cambridge, England, 1975.

(55) Levin, Janet; “Functionalism” in The Stanford Encyclopedia of Philosophy (Summer 2010 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/sum2010/entries/functionalism/

(56) Wikipedia contributors; “Functionalism (Philosophy of Mind)” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=Functionalism_(philosophy_of_mind)&oldid=528239243

(57) Marr, D.; Vision: A Computational Approach. Freeman & Co., San Francisco, California, 1982.

(58) Dennett, Daniel C. The Intentional Stance. The MIT Press, Cambridge Massachusetts. 1998. ISBN 0-262-54053-3.

(59) Putnam, Hilary; “Minds and Machines” (1960) reprinted in Mind, Language and Reality, Cambridge University Press, Cambridge, Massachusetts, 1975.

“The Nature of Mental States” (1967) reprinted in Mind, Language and Reality, Cambridge University Press, Cambridge, Massachusetts, 1975.

(60) Wikipedia contributors; “Alan Turing” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=Alan_Turing&oldid=530445657

(61) Wikipedia contributors; “Jerry Fodor” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=Jerry_Fodor&oldid=529016296

(62) Wikipedia contributors; “David Lewis (Philosopher)” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=David_Lewis_(philosopher)&oldid=527490288

(63) Wikipedia contributors; “Daniel Dennett” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=Daniel_Dennett&oldid=528400151

         Dennett, Daniel; Sweet Dreams: Philosophical Obstacles to a Science of Consciousness, A Bradford Book, MIT Press, Cambridge, Massachusetts. 2006. ISBN 978-0-262-54191-6.

(64) Wikipedia contributors; “William Lycan” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=William_Lycan&oldid=523476592

         Lycan, William G.; Consciousness and Experience, A Bradford Book, MIT Press, Cambridge, Massachusetts, 1996. ISBN 978-0-262-12197-2.

(65) Dennett, Daniel; “Quining Qualia” in Mind and Cognition: A Reader, W.G. Lycan (ed.), MIT Press, Cambridge, Massachusetts, 1990.

(66) Lycan, William G; Consciousness, MIT Press, Cambridge, Massachusetts. 1987.

(67) Wikipedia contributors; “Inverted Spectrum” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=Inverted_spectrum&oldid=436315508

(68) Tye, Michael; Ten Problems of Consciousness, Bradford Books, The MIT Press, Cambridge, Mass. 1995

Consciousness, Color, and Content, Bradford Books, The MIT Press, Cambridge, Mass. 2000.

 Hardin, C.; Color for Philosophers, Hackett Publishing, Cambridge, Mass. 1993.

(69) Kirk, Robert; “Zombies” in The Stanford Encyclopedia of Philosophy (Summer 2012 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/sum2012/entries/zombies/

(70) Chalmers, David; The Character of Consciousness (Philosophy of Mind), Oxford University Press, New York, New York, 2010. ISBN 978-0-195-31111-2.

“Does Conceivability Entail Possibility” in Conceivability and Possibility, T.Gendler & J.Hawthorne (Eds.) Oxford University Press, New York, New York, 2002. ISBN 978-0-198-25090-6.

(71) Wikipedia contributors; “China Brain” in Wikipedia, The Free Encyclopedia. URL=http://en.wikipedia.org/w/index.php?title=China_brain&oldid=521938777

(72) Tye, Michael; “Qualia” in The Stanford Encyclopedia of Philosophy (Summer 2009 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/sum2009/entries/qualia/

 “Absent Qualia and the Mind-Body Problem” in The Philosophical Review, Vol 115 (2006), Pgs 139-168.

(73) For a brief Bio of John Conway see — URL=http://www-history.mcs.st-andrews.ac.uk/Biographies/Conway.html

For a Java example of the Game of Life, including a discussion of the Rules, see – URL=<http://www.bitstorm.org/gameoflife/>.

(74) Dennett, Daniel C; Consciousness Explained. Little, Brown and Company, Boston, 1991. ISBN 0-316-18065-3

 

Armstrong, D.M.; A Materialist Theory of the Mind, Revised Edition, Routledge, New York, New York, 1993. ISBN 978-0-415-10031-1. Kindle Edition.

Bailey, Andrew; “Supervenience and Physicalism” in Synthese, Vol 117, No 1 (1998/1999) pp 53-73, URL=<http://www.jstor.org/stable/20118097>

Beakley, Brian & Ludlow, Peter (ed.); The Philosophy of Mind: Classical Problems / Contemporary Issues, 2nd Edition, The MIT Press, Cambridge, Massachusetts, 2006, ISBN 0-262-52451-1.

Benbaji, Hagit; “Constitution and the Explanatory Gap” in Synthese, Vol 161, No 2 (Mar 2008), pp 183-202, URL=http://www.jstor.org/stable/27653688

Bickle, John; “Multiple Realizability” in The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.), URL=http://plato.stanford.edu/archives/fall2008/entries/multiple-realizability/

Block, Ned & Stalnaker, Robert; “Conceptual Analysis, Dualism and the Explanatory Gap” in The Philosophical Review, Vol 108, No 1 (Jan 1999), Pp 1-46.

Bontly, Thomas; “The Supervenience Argument Generalizes” in Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, Vol 109, No 1 (May 2002), pp 75-96, URL=http://www.jstor.org/stable/4321265

Burge, Tyler (1973); “Reference and Proper Names” in The Journal of Philosophy Vol 70, No 14, Pgs 425-439. URL=http://www.jstor.org/stable/2025107

Burge, Tyler (1986); “Individualism and Psychology” in The Philosophical Review, Vol 95, No 1, Pg. 3-45. URL=http://www.jstor.org/stable/2185131

Chalmers, David J. (ed.); Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, Oxford, England, 2002, ISBN 978-0-19-514581-6.

Dawkins, Richard; The Selfish Gene. Oxford University Press; Oxford, England; 1976. ISBN 0-19-857519-x.

Dawkins, Richard; The Extended Phenotype. Oxford University Press, Oxford, 1982; ISBN 0-19-286088-7.

Dawkins, Richard; The Blind Watchmaker. W.W. Norton & Co.; New York, New York; 1987. ISBN 0-393-02216-1.

Dawkins, Richard; Climbing Mount Improbable. W.W. Norton & Co., New York, New York. 1997. ISBN 0-393-03930-7.

Dawkins, Richard.; The Ancestor’s Tale. A Mariner Book / Houghton Mifflin Company, New York, New York. 2004. ISBN 0-618-61916-x.

Dennett, Daniel C; Content and Consciousness, Routledge and Kegan Paul, London, 1969.

Dennett, Daniel C; Brainstorms, Philosophical Essays on Mind and Psychology, Bradford Books, Montgomery, Vt. 1978.

Dennett, Daniel C; Consciousness Explained. Little, Brown and Company, Boston, 1991. ISBN 0-316-18065-3

Dennett, Daniel C; Darwin’s Dangerous Idea. Touchstone Books, New York, 1995; ISBN 0-684-82471-x.

Dennett, Daniel C; Kinds of Minds: Toward an Understanding of Consciousness. Basic Books (Harper Collins Publishers), New York, 1996. ISBN 0-465-07351-4.

Dennett, Daniel C; BrainChildren: Essays on Designing Minds.   The MIT Press, Cambridge Massachusetts. 1998. ISBN 0-262-54090-8.

Dennett, Daniel C; The Intentional Stance. The MIT Press, Cambridge Massachusetts. 1998. ISBN 0-262-54053-3.

Dennett, Daniel C; Freedom Evolves. Viking Penguin Press, New York, New York. 2003. ISBN 0-670-03186-0.

Dennett, Daniel C; Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. The Jean Nicod Lectures, The MIT Press, Cambridge, Massachusetts. 2005. ISBN 0-262-04225-8.

Dretske, Fred (1996); “Phenomenal Externalism, or If Meanings Ain’t in the Head, Where are Qualia?”, in Philosophical Issues, Vol 7: Perception, Pg 143–158. URL=http://www.jstor.org/stable/1522899

Fodor, Jerry (1987); Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Bradford Books, The MIT Press, Cambridge, Massachusetts. ISBN 978-0-272-56052-8.

Fodor, Jerry (2003); Hume Variations (Lines of Thought), Clarendon Press, Oxford University Press, Oxford, England. ISBN 978-0-19-928733-2.

Garcia-Carpintero, Manuel; “The Supervenience of Mental Content” in Proceedings of the Aristotelian Society, New Series, Vol 94 (1994), pp 117-135, URL=http://www.jstor.org/stable/4545190

Gertler, Brie; “The Explanatory Gap is Not an Illusion: Reply to Michael Tye” in Mind, New Series, Vol 110, No 439 (Jul 2001), pp 689-694, URL=http://www.jstor.org/stable/3093652

Gillett, Carl & Loewer, Barry (eds.); Physicalism and Its Discontents. Cambridge University Press. Cambridge, England, 2001. ISBN 978-0-521-80175-1.

Graham, George & Horgan, Terence; “Mary, Mary, Quite Contrary” in Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, Vol 99, No 1 (May 2000) pp 59-87, URL=http://www.jstor.org/stable/4321045

Guttenplan, Samuel (ed.); A Companion to the Philosophy of Mind, Blackwell Publishing, Oxford, England, 1994, ISBN 978-0-631-19996-0.

Hameroff, Stuart R. & Kaszniak, Alfred W. & Chalmers, David (Eds): Towards a Science of Consciousness III: The Third Tucson Discussions and Debates, A Bradford Book, The MIT Press, Cambridge, Mass. 1999. ISBN 978-0-2625-8181-3.

Hardcastle, Valerie Grey; “On the Matter of Minds and Mental Causation” in Philosophy and Phenomenological Research, Vol 58, No 1 (Mar 1998), URL=http://www.jstor.org/stable/2653628

Hardin, C. L.; “Qualia and Materialism: Closing the Explanatory Gap” in Philosophy and Phenomenological Research, Vol 48 No 2 (1987) Pp 281-98.

Harman, Gilbert; “Can Science Understand the Mind?” in Conceptions of the Human Mind: Essays in Honor of George A. Miller, G. Harman (ed.). Lawrence Erlbaum, Hillsdale, N.J., 1993: Pp 111-121.

Harman, Gilbert; “Immanent and Transcendent Approaches to Meaning and Mind” in his Reasoning, Meaning, and Mind. Oxford University Press ,Oxford, England, 1999. Pp 262-275.

Horgan, Terence; “Functionalism, Qualia, and the Inverted Spectrum” in Philosophy and Phenomenological Research, Vol 44, No 4 (Jun 1984), pp 453-469, URL=http://www.jstor.org/stable/2107613

Howell, Robert J.; “The Knowledge Argument and Objectivity” in Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, Vol 135, No 2 (Sep 2007), pp 145-177, URL=http://www.jstor.org/stable/40208745

Jackson, Frank; “Epiphenomenal Qualia” in The Philosophical Quarterly, Vol 32, No 127 (Apr. 1982), pp 127-136, URL=http://www.jstor.org/stable/2960077

Jackson, Frank; “Mental Causation” in Mind, New Series, Vol 105, No 419 (Jul 1996), pp 377-413, URL=http://www.jstor.org/stable/2254828

Jackson, Frank; “What Mary Didn’t Know” in The Journal of Philosophy, Vol 83, No 5 (May 1986), pp 291-295, URL=http://www.jstor.org/stable/2026143

Kallestrup, Jesper; “Epistemological Physicalism and the Knowledge Argument” in American Philosophical Quarterly, Vol 43, No 1 (Jan 2006), pp 1-23, URL=http://www.jstor.org/stable/20010220

Kim, Jaegwon; “On the Psycho-Physical Identity Theory,” The American Philosophical Quarterly, Vol 3 (1966), Pg 227-235.

Kim, Jaegwon; Philosophy of Mind, 3rd Edition, Westview Press, Perseus Books Group, Philadelphia, Pennsylvania, 2011, ISBN 978-0-8133-4458-4.

Kirk, Robert; “Physicalism and Strict Implication” in Synthese, Vol 151, No 3 (Aug 2006), pp523-536, URL=http://www.jstor.org/stable/20118826

Kirk, Robert; “The Inconceivability of Zombies” in Philosophical Studies: An International Journal of Philosophy in the Analytic Tradition, Vol 139, No 1 (May 2008), pp 73-89, URL=http://www.jstor.org/stable/40208892

Kripke, Saul A (1980); Naming and Necessity. Harvard University Press. Cambridge, Massachusetts. 1980. ISBN 0-674-59845-8.

Kripke, Saul A. (1982); Wittgenstein on Rules and Private Language, Harvard University Press, Cambridge, Massachusetts. ISBN 0-674-95401-7.

Kripke, Saul A. (1976); “A Puzzle About Belief” in Meaning and Use (Studies in Linguistics and Philosophy), A. Margalit (ed.), D. Reidel Publishing Company, Dordrecht, Holland, 1976. Pgs 239-283.

Ladd, George Trumbull; Philosophy of Mind, Hard Press, Miami, Florida, Orig. Pub. Date 1895, ISBN 978-12-903167-8.

Levin, Janet; “Is Conceptual Analysis Needed for the Reduction of Qualitative States?” in Philosophy and Phenomenological Research, Vol 64, No 3 (May 2002), pp 571-591, URL=http://www.jstor.org/stable/3070969

Macdonald, Graham; “Emergence and Causal Powers” in Erkenntnis, Vol 67, No 2 (Sep 2007), pp 239-253, URL=http://www.jstor.org/stable/27667927

Marras, Ausonio; “On Putnam’s Critique of Metaphysical Realism: Mind-Body Identity and Supervenience” in Synthese, Vol 126, No 3 (Mar 2001), pp 407-426, URL=http://www.jstor.org/stable/20117118

McLaughlin, Brian P. & Beckermann, Ansgar & Walter, Sven (eds.); The Oxford Handbook of Philosophy of Mind, Oxford University Press, Oxford, England, 2009, ISBN 978-0-19-959631-7.

Pinker, Steven; How the Mind Works, W.W.Norton & Company, New York, New York, 1997, ISBN 0-393-31848-6.

Polcyn, Karol; “Phenomenal Consciousness and Explanatory Gap” in Diametros, Vol 6 (2005), Pp 62-52.

Raffman, Diana; “Even Zombies Can Be Surprised: A Reply to Graham and Horgan” in Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, Vol 122, No 2 (Jan 2005), pp 189-202, URL=http://www.jstor.org/stable/4321553

Robinson, Howard; “Dennett on the Knowledge Argument” in Analysis, Vol 53, No 3 (Jul 1993), pp 174-177, URL=http://www.jstor.org/stable/3328467

Schroder, Jurgen; “Physicalism and Strict Implication” in Synthese, Vol 151, No 3 (Aug 2006), pp 537-545, URL=http://www.jstor.org/stable/20118826

Searle, John R. (1958); “Proper Names” in Mind, New Series, Vol 67, No 266 (Apr 1958), Pgs 166-173. URL=http://www.jstor.org/stable/2251108

Searle, John R. (1969); Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, Cambridge, England.

Searle, John R. (1983); Intentionality: An Essay in the Philosophy of Mind. Cambridge University Press, Cambridge, England.

Searle, John R. (1998); Mind, Language, and Society: Philosophy in the Real World, Basic Books, Perseus Books Group, New York, New York. ISBN 978-0-465-04521-1.

Shea, Nicholas; “Does Externalism Entail the Anomalism of the Mental?” in The Philosophical Quarterly, Vol 53, No 211 (Apr 2003), pp 201-213, URL=http://www.jstor.org/stable/3542864

Stueber, Karsten R.; “Mental Causation and the Paradoxes of Explanation” in Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, Vol 122, No 3 (Feb 2005), URL=http://www.jstor.org/stable/4321560

Tanney, Julia; “On the Conceptual, Psychological, and Moral Status of Zombies, Swamp-Beings, and Other ‘Behaviorally Indistinguishable Creatures'” in Philosophy and Phenomenological Research, Vol 69, No 1 (Jul 2004), pp 173-186, URL=http://www.jstor.org/stable/40040708

Thomasson, Amie; “A Nonreductivist Solution to Mental Causation” in Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, Vol 89, No 2/3 (Mar 1998), pp 181-195, URL=http://www.jstor.org/stable/4320818

Tye, Michael; “Phenomenal Consciousness: The Explanatory Gap as a Cognitive Illusion” in Mind, New Series, Vol 108, No 432 (Oct 1999), pp 705-725, URL=http://www.jstor.org/stable/2660075

Tye, Michael: Consciousness Revisited: Materialism without Phenomenal Concepts, MIT Press, Cambridge, Mass. 2009. Pg 237. ISBN 0-262-01273-1. Kindle Edition.

Van de Laar, Tjeerd; “Dynamical Systems Theory as an Approach to Mental Causation” in Journal for General Philosophy of Science, Vol 37, No 2 (Oct 2006), pp 307-332, URL=http://www.jstor.org/stable/25171349

Webster, William Robert; “Human Zombies are Metaphysically Impossible” in Synthese, Vol 151, No 2 (Jul 2006), pp 297-310, URL=http://www.jstor.org/stable/20118803

Yablo, Stephen; “Mental Causation” in The Philosophical Review, Vol 101, No 2 (Apr 1992), pp 245-280.
