
on too hard, or chewed on too eagerly by tigers. It is against this sort
of stable background that the normal human cognizer is studied as a
natural kind. Thus we study human cognizers not just as a current kind
of interesting physical object, but after having taken a peek at historical
human life lines, as opposed to human life ends, and after examining
the environmental contexts of these differences. A question that needs
examining, however, is what it means exactly to claim that the sort of
environment just described is “the normal one” in which human cog-
nition takes place.
First, it will help to ward off a possible confusion in the wake of the
discouraging words uttered above about survival chances for most
species. Haven’t I claimed that in the environment that is statistically
normal for a species, the environment in which animals of that species
typically find themselves, the animal dies? It dies before maturing or
reproducing. Then doesn’t it follow that we must study the individual not
in the normal environment but in an especially lucky one? But of
course it is only over their whole lives that the statistics on individuals
are so terrible. Hour by hour, supporting rather than threatening envi-
ronments may be statistically normal. So there may after all be some rel-
atively fixed and stable set of conditions, for many species, relative to
which the lifeline mechanisms of its members can be studied, deaths
before reproduction being viewed as caused by temporary disruptions
of these conditions. Similarly, although nobody doubts that human cog-
nition requires a supporting environment, perhaps it requires, on the
whole, merely the same mundane set of stable supporting conditions
that sustains the human body from hour to hour. Against this steady
background environment, the human, including the cognitive systems,
might be studied purely as a natural kind. That, I believe, is the image
most have of the study of human cognition.
But there is something important left out of this picture. What is left
out is the fuzz on the individual lifelines. Remember Tabby in search of
her dinner, Grackling in search of a mate, and Rover with sand in his
eye. In general, the behaviors of animals effect loops through the envi-
ronment that feed back into their lifelines only under quite special con-
ditions, conditions that are not statistically average at all. Moreover the
various mechanisms controlling different kinds of behaviors each require
different supporting conditions. Each behavior has its own special needs.
Tabby’s hunting behavior requires a proximate mouse or bird that is not
too wary and fleet; Grackling’s dancing behavior requires a proximate
female who is willing, and so forth. The job of the cognitive systems is
to collect information about the specifics of the environment on which
such behaviors will be based. The question arises, then, whether the
cognitive systems also have fuzz on them – whether they, too, require
special supporting conditions that vary with the tasks to be performed.
A contemporary tradition in epistemology has it that whether a
thinker has knowledge as opposed to true belief is determined by a
partly serendipitous relation between thinker and environment. Contrary
to Plato’s claims, there is cognitive luck involved in knowing.
More fundamentally, cognitive luck is required for success in thinking OF
things, for success in entertaining coherent propositions. Environmental
luck is required for the cognitive systems to maintain a coherent inner
representational system. This means that cognitive psychology must be
the study of happy interactions with the environment, an essentially
ecological study. This follows from the externalist view of mental se-
mantics I have been presenting in this book.
Assume that the central job of the cognitive systems is to collect in-
formation over time, to amplify this information through inference, and
to bring it to bear in determining action. Note that amplificatory infer-
ence always depends on a middle term (Section 10.2). In order to make
valid amplificatory inferences, then, the cognitive systems must be able
to tell when various separate bits of information that have been col-
lected over time concern the same thing and when they concern differ-
ent things. The same holds whenever information that has been collected
is brought to bear upon action. From this we have concluded that a
crucially important task that must continually be performed by the
cognitive systems is managing to recognize when new information coming
in concerns the very same thing again, something one already knows
something about. Without this, none of the information taken in can be
used. Without this the representational system would become wholly
corrupted. Its representations would cease to have any clear meanings,
becoming hopelessly referentially equivocal or, at the limit, referentially
empty. The capacity correctly to recognize sources of incoming
information[2] is a requirement for having any coherent thought at all.
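The role of such recognition can be pictured with a small toy model (a
sketch of mine, not part of the theory’s official apparatus; all class and
function names are invented for illustration). Incoming information tokens
are bundled under a “sameness marker,” and an amplificatory inference is
licensed only when its premises share a marker, that is, only when their
subjects have been recognized as one and the same.

```python
from dataclasses import dataclass

@dataclass
class InfoToken:
    predicate: str   # e.g., "is a proximate mouse", "is not too wary"
    marker: int      # sameness marker assigned on intake

class Cognizer:
    def __init__(self):
        self._next_marker = 0
        self._bundles = {}   # marker -> predicates collected about one subject

    def take_in(self, predicate, same_as=None):
        """Store incoming information; `same_as` signals that the new
        information has been recognized as concerning the very same
        thing as an earlier token."""
        if same_as is None:
            self._next_marker += 1
            marker = self._next_marker
        else:
            marker = same_as.marker
        self._bundles.setdefault(marker, []).append(predicate)
        return InfoToken(predicate, marker)

    def combine(self, a, b):
        """Amplificatory inference needs a middle term: combining is
        licensed only for tokens marked as concerning the same subject."""
        if a.marker != b.marker:
            raise ValueError("subjects not coidentified: no middle term")
        return f"the same thing {a.predicate} and {b.predicate}"

c = Cognizer()
t1 = c.take_in("is a proximate mouse")
t2 = c.take_in("is not too wary", same_as=t1)  # the same thing again
print(c.combine(t1, t2))
```

A misapplied same_as here (taking input from a new source to concern an
old one) would silently corrupt a bundle: the computational analogue of the
referential equivocation just described.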
This then is the question to be pressed. Does this capacity rest
merely on the same mundane set of stable supporting conditions that
sustains the human body from hour to hour, or does it have its own
special environmental needs, differing perhaps from one cognitive task
to another?
That our powers of recognition can fail is obvious enough. Take
places or spouses, colors, minerals, tunes, species, buildings, diseases –
whatever it is, you can misidentify it. It is possible to construct
conditions – external conditions – under which someone completely
familiar with it may still fail to recognize it. Are such failures the fault of the
cognitive systems, or is it epistemic bad luck that sometimes puts these
systems beyond their powers?

[2] InformationC. See Appendix B.



We should keep clearly in focus what the cognitive systems are for.
Their mission is not, for example, the acquisition of justified certainty.
They are not at fault or malfunctioning when they take risks, when
they rely on environmental stability. As modern skeptics are well aware,
no one lives by justified certainty. Justified certainty is not what is
needed to advance the lifeline. Instead, once again we find at work the
principles of multiplication and division. Having many different fallible
methods of recognizing the same person, the same mineral, the same
species, the same disease, some methods that can be used under some
conditions, others under other conditions, employing these methods
redundantly whenever possible, employing each whenever an opportunity
for it happens to arise – this is the strategy that gets us by. Much of the
time it gets us by. But every one of these diverse methods requires its
own unique sort of environmental support.
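This opportunistic, redundant strategy can be sketched schematically as
well (again an illustration of mine with invented names; each function
stands in for a recognition skill whose special supporting condition the
environment may or may not supply):

```python
# Each recognizer works only when the environment supplies its own
# supporting condition (a visible face, a spoken word, a called name).
def by_face(scene):  return scene.get("face")
def by_voice(scene): return scene.get("voice")
def by_name(scene):  return scene.get("name_called")

METHODS = [by_face, by_voice, by_name]

def recognize(scene, target):
    """Apply every method whose opportunity happens to arise;
    redundant hits cross-check one another, but none is infallible."""
    verdicts = [m(scene) == target for m in METHODS if m(scene) is not None]
    return bool(verdicts) and all(verdicts)

print(recognize({"voice": "Jane"}, "Jane"))                  # True: one chance, taken
print(recognize({"face": "Jane", "voice": "Joan"}, "Jane"))  # False: a misleading sign
```

The point is only that each branch has its own environmental precondition:
defeat any one of them (a lookalike face, a shared name) and that method
fails, while the others may still get us by.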
Consider a stereoscope that produces an illusory three-dimensional
image by causing the visual systems to misidentify. It causes them to take
visual contents derived from two different objects as though derived
from the same source, thus creating the illusion of a three-dimensional
scene (Section 10.3). The illusory image is not formed due to a mal-
function within the (internal) visual system. The visual system is not
broken or reacting in a way it should not when forming such an image.
Instead, the environment is abnormal – not abnormal in some general
feature constantly needed to sustain human life, but in a very specific
feature, needed to sustain correct binocular vision.
Sometimes we recognize people by their faces, sometimes by their
stature and walk, sometimes by their voices, sometimes by their names.
But an uncooperative environment can produce two people who look
(at least for the moment, or at least from here) just too much alike, or
who sound (in this context) just too much alike, or who have exactly
the same name. No matter how carefully our recognizing abilities are
tuned, and no matter how clever the various mechanisms by which they
work, providence will sometimes put up misleading signs. Coherent
thinking rests, not on some one steady set of normal environmental
conditions, but on a vast variety of special circumstances, each required
for proper exercise of a different recognition skill.
Like the species lines, and the individual lifelines, and the little lines
representing behaviors, the cognitive lines, too, often get broken off by
the environment. Just as the ability to live on and to multiply requires
environmental support, the ability to maintain coherent thoughts – to
have clear ideas – requires environmental support.


Appendix A
Contrast with Evans on Information Based Thoughts



The theory I have presented of substance concepts and the thoughts
governed by them is similar in a number of respects to Evans’ theory of
“information based thoughts” in The Varieties of Reference (1982). Evans’
information based thoughts were thoughts containing information derived
from perception or testimony, where the thinker also had “an adequate
concept” of the information’s source. Evans is not altogether clear,
however, on what “information” is supposed to be. Initially (p. 122n), he
refers us to J. J. Gibson (1968), but his subsequent discussion, which
makes reference to informational states that “fail to fit” their own
objects, “decaying” information (p. 128n), “garbled” information (p. 129),
informational states that are “of nothing” (p. 128), and so forth, is
glaringly inconsistent with Gibson’s conception of information.
The clearest images Evans presents us are information contained, on
the one hand, in a photograph, and on the other, it seems, in a percept
(not, as Gibson would have had it, in energy impinging on sensory sur-
faces). But “[a]n informational state may be of nothing: this will be the
case if there was no object which served as input to the information
system when the information was produced” (p. 128). On the other
hand, “two informational states embody the same information provided
they result from the same informational event . . . even if they do not
have the same content: the one may represent the same information as
the other, but garbled in various ways” (pp. 128–9). Thus it seems that an
“informational state” need not contain any information at all, and that
when it does “embody” or “represent” information this need not coin-
cide with its “content.” What then makes it into an “informational
state”? What determines its “content”? And what determines what the
informational state “represents”?


Of a photograph, Evans says,

A certain mechanism produces things which have a certain informational content.
. . . The mechanism is a mechanism of information storage, because the proper-
ties that figure in the content of its output are (to a degree determined by the ac-
curacy of the mechanism) the properties possessed by the objects which are the
input to it. And we can say that the product of such a mechanism is of the objects
that were the input to the mechanism when the product was produced. Corre-
spondingly, the output is of those objects with which we have to compare it to
judge the accuracy of the mechanism at the time the output was produced. . . .
Now this structure can be discerned whenever we have a system capable of
reliably producing states with a content which includes a certain predicative
component, or not, according to the state of some object. (The structure is of
course discernable even if, on some particular occasion, the system malfunc-
tions.) (Evans 1982, pp. 124–5)

From this I take it that Evans’ “information” results from operation of
a system that “reliably” produces certain output properties as a function
of certain input properties even though it may sometimes be inaccurate
or malfunction, and that its “content” is determined by reference to the
properties that the input either has, or would have had if that same
output had been produced when the mechanism was functioning properly.
The information is about the object or objects directly causing the
input, granted these objects are of the same sort that produce input to
the device when functioning properly. Otherwise we have an
“informational state” that is not “about” anything and hence carries no
“information.” The properties of the inputting object(s) about
which the informational state embodies information are those proper-
ties of the object that the mechanism would have been guided by in
producing its output had it been functioning properly. Evans calls these
properties, granted there was some input object of the right sort, “the
information represented.” Thus it happens that an informational state
that misrepresents “represents the same information” as one that repre-
sents correctly.
Clearly we must be very careful here not to equivocate on the notion
“what is represented.” Perhaps we usually think of “what is represented”
as the intentional content of a representation. But for Evans, the inten-
tional content is called “content” and “what is represented” is what was
supposed to have been the intentional content, that is, what would have been
the intentional content had the mechanism operated properly. “What is



represented” is whatever properties are at the source that produces the
informational state, granted the source is of the right general kind.
This notion of information is blatantly non-Gibsonian, and (more fa-
miliar to philosophers, perhaps) blatantly non-Dretskean (Dretske
1980). It is not the kind of “information” that was a “common com-
modity” in the world long before organisms came along to use it.
Rather, this notion loudly demands prior analysis of the normative no-
tions, “accuracy,” “malfunction,” and even, I suggest, “reliable” – notions
that can find no footing prior to the interests of organisms.
Leave aside, for the moment, questions about what kind of norma-
tivity might be involved with this kind of “information.” I have pro-
posed an interpretation of Evans’ analysis of the intentional content cor-
responding to the “predicative component” of an information bearing
state. This content is given by reference to what the properties at the
source causing the informational state would have to be if the informa-
tional system were giving this output when functioning properly. It is
not given by the actual properties of the input. Similarly, Evans is very
insistent that the fact that a certain object causes the input to the infor-
mational system does not constitute its being an intentional object (sub-
ject) of the information bearing output. To adopt that position would
be to adopt the “photograph” model of what a thought is of, against
which Evans argues at length. Rather, it seems, for the information
bearing state to have an intentional subject – for it to be a thought of
something – a “fundamental idea” of its object must be supplied/applied.
I have advocated abandonment of the theory of fundamental
ideas, however (Section 13.4). And we can, I believe, easily reconstruct
an account of the intentional object (subject) of thought along Evans’
lines without reference to fundamental ideas.
Evans remarks on “what is perhaps the central feature of our system
of gathering information from individuals: namely the fact that we
group pieces of information together, as being from the same object –
that we collect information into bundles” (p. 126). This collecting
together, Evans calls “reidentification” of the subject of information.
Evans’ thesis is that only when thinkers “have the capacity” to reidentify
the objects of their thought, are they actually thinking of anything. Ca-
pacities, for him, seem to be something like reliable dispositions. (Actu-
ally, it is very unclear what they are, so we must guess.) Thus, it appears,
just as one thinks of a property when one’s cognitive systems are “capa-
ble of reliably producing states with a content which includes a certain



predicative component, or not, according to the state of some object”
(p. 125), similarly, one thinks of an object only when one’s cognitive
systems are capable of reliably producing informational states about that
object that get bundled together, the object thus being reidentified.
To get from Evans’ position, thus interpreted, to the one I advocate,
a number of adjustments are required. First, we must replace Evans’ idea
of what the system regularly does, with what it has the ability to do, that
is, in part, what it is the, or a, proper function of the system to do, given
its evolutionary history and its learning history (Section 4.6). That is the
way I would unpack the normativity implicit in Evans’ references to
“accuracy” and “malfunction.”
Second, we must replace Evans’ notion “reidentify” with the notion
“coidentify” (Section 10.2) or, when the intentional significance of this
act is our focus, with the notion “reidentify.” Reidentifying something
is not just thinking of it again, nor is it making an identity judgment. It
is marking an informational state with a sameness marker, in preparation
for its use as a middle term in inference, or an analogue of inference
(Section 10.2).
Third, we must take the notion “information” apart, carefully separat-
ing natural information from intentional information, that is, from the
content of an intentional representation such as an inner representation.
The form of natural information that is important here is the general
form that I call “natural informationC,” as contrasted with Gibson’s and
Dretske’s notions of natural information (see Appendix B). Intentional
information is what is represented by an intentional representation when
the representation is true, and true in accord with a normal explanation
for proper functioning of the representation producing devices that
formed it. Now we can put the matter this way. One thinks of an object
(represents it conceptually) only when one’s cognitive systems have the
ability (Chapter 4) to translate natural informationC about the object
into intentional information about it such that the mental representa-
tions carrying this information are correctly marked with sameness
markers as suitable for coidentification (Section 10.2).




Appendix B
What Has Natural Information to Do with Intentional Representation?[1]


“According to informational semantics, if it’s necessary that a creature can’t
distinguish Xs from Ys, it follows that the creature can’t have a concept that
applies to Xs but not Ys.”
(Jerry Fodor, The Elm and the Expert, p. 32)


There is, indeed, a form of informational semantics that has this verifi-
cationist implication. The original definition of information given in
Dretske’s Knowledge and the Flow of Information (1981, hereafter KFI), when
employed as a base for a theory of intentional representation or “content,”
has this implication. I will argue that, in fact, most of what an animal
needs to know about its environment is not available as natural informa-
tion of this kind. It is true, I believe, that there is one fundamental kind of
perception that depends on this kind of natural information, but more so-
phisticated forms of inner representation do not. It is unclear, however, ex-
actly what “natural information” is supposed to mean, certainly in Fodor,
and even in Dretske™s writing. In many places, Dretske seems to employ a
softer notion than the one he originally defines. I will propose a softer
view of natural information that is, I believe, at least hinted at by Dretske,
and show that it does not have verificationist consequences. According to
this soft informational semantics, a creature can perfectly well have a rep-
resentation of Xs without being able to discriminate Xs from Ys.
I believe there is some ambivalence in Dretske’s writing about natural
information, especially noticeable when comparing KFI to Explaining
Behavior (1991, hereafter EB), but if we ignore some of Dretske’s

[1] This chapter is drawn from a paper that was originally presented at the Conference of the
Royal Institute of Philosophy on Naturalism, Evolution and Mind in July 1999.



examples, the explicit statement of the theory in KFI is univocal. This
theory is also strongly suggested in Fodor’s work on mental content
(1990, 1994, 1998) and seems to be consonant with J. J. Gibson’s use of
“information” as well.
According to Dretske,

A signal r carries the information that s is F = The conditional probability
of s’s being F, given r (and k), is 1 (but, given k alone, less than 1). (KFI,
p. 65)

Dretske’s “k” stands for knowledge already had about s. Knowledge that
p is belief that is caused by information that p. It follows that a signal
carries the information that s is F when either it alone, or it taken
together with some other signal that has also been transmitted to the
receiver, returns a probability of 1 that s is F. Thus, I suggest, we can drop
the parenthetical “and k” in the formulation and just say that a signal
carries the information that s is F if it is an operative part of some more
complete signal, where the conditional probability that s is F, given the
complete signal, is 1 but would not be 1 without the part. Thus we
eliminate reference to knowing.
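In symbols (my paraphrase of these two formulations, writing $Fs$ for s’s
being F):
\[
r \text{ carries the information that } s \text{ is } F \iff P(Fs \mid r \wedge k) = 1 \ \text{ and } \ P(Fs \mid k) < 1,
\]
and, with the reference to knowledge $k$ eliminated: $r$ carries the
information that $s$ is $F$ just in case $r$ is an operative part of some
complete signal $R$ such that
\[
P(Fs \mid R) = 1 \quad\text{but}\quad P(Fs \mid R \setminus r) < 1,
\]
where $R \setminus r$ is the complete signal with the part $r$ omitted.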
What is meant by saying, in this context, that the occurrence of one
thing, “the signal,” yields a probability of 1 that another thing, “s being
F,” is the case? In a footnote, Dretske explains:

In saying that the conditional probability (given r) of s’s being F is 1, I mean to
be saying that there is a nomic (lawful) regularity between these event types, a
regularity which nomically precludes r’s occurrence when s is not F. There are in-
terpretations of probability (the frequency interpretation) in which an event
can fail to occur when it has a probability of 1 . . . but this is not the way I
mean to be using probability in this definition. A conditional probability of 1
between r and s is a way of describing a lawful (exceptionless) dependence be-
tween events of this sort. . . . (KFI, p. 245)

and in the text he tells us:

Even if the properties F and G are perfectly correlated . . . this does not
mean that there is information in s’s being F about s’s being G. . . . For the
correlation . . . may be the sheerest coincidence, a correlation whose persis-
tence is not assured by any law of nature or principle of logic. . . . All Fs can
be Gs without the probability of s’s being G, given that it is F, being 1.
(pp. 73–4)




The probability that s is F given r must follow, it appears here, given
merely logic and natural law. That is, the necessity must be strict natural
necessity.[2]
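Put formally (my gloss of the footnote just quoted), the probability of 1 is
to be read not as a limiting relative frequency but as a nomic conditional:
\[
P(Fs \mid r) = 1 \quad\text{means}\quad \Box_{\mathrm{nomic}}\,(r \rightarrow Fs),
\]
an exceptionless dependence guaranteed by natural law, so that a merely
accidental universal correlation (all Fs happening to be Gs) yields no
information in this sense.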
The next question concerns the reference classes intended when re-
ferring to “the probability that s is F, given r.” r was said to be a signal
and s being F would seem to be a state of affairs, but if there are causal
laws necessitating the one given the other, these laws must be general.
There must be certain general aspects under which we are considering
r, and the fact that s is F, by which they are connected in a lawful way.
They cannot be connected in a lawful way merely as an individual oc-
currence and an individual fact. It must be a certain type of signal that
determines, with a probability of 1, a certain type of fact. And this will
yield two reference classes for the probability, the class of “signals” of a
certain type and the class of facts of a certain type, such that the prob-
ability that a signal of that type is connected with a fact of that type is
