The issue of methodological solipsism in the philosophy of mind and psychology has received enormous attention and discussion in the decade since the appearance of Jerry Fodor's "Methodological Solipsism" [Fodor 1980]. But most of this discussion has focused on the consideration of the now infamous "Twin Earth" type examples and the problems they present for Fodor's notion of "narrow content". I think there is a deeper and more general moral to be found in this issue, particularly in light of Fodor's more recent defense of his view in Psychosemantics [Fodor 1987]. Underlying this discussion are questions about the nature and plausibility of the claim that scientific explanation should observe a constraint of methodological individualism. My goal in what follows is to bring out this more general problem in Fodor's "internalist" account of the mental.
My interest here lies in part in the role which a misuse of "methodological individualism" plays in Fodor's arguments about psychological taxonomy. But I also wish to use this as a kind of case study in examining the larger questions of individualism and context-dependence in scientific explanation. I'll thus be taking this as an opportunity to offer a suggestion which I see as having significantly more general application in clarifying the use and misuse of such notions in the philosophy of science. Roughly, I will claim that by making the appropriate explicit connections between the idea of locality and levels of organization in the higher-level sciences, we can make clear the fallacy in a particular kind of argument which is sometimes offered in support of an individualistic position in various domains.
Fodor's project in chapter 2 of Psychosemantics is largely to argue for the claim that psychological taxonomy ought to treat physically identical brain states as psychologically type identical -- that is, to argue that the states characterized by our psychological taxonomy should supervene on just the local physical properties of our brains. As he puts it, there are two main points in his argument:
Methodological point: Categorization in science is characteristically taxonomy by causal powers. Identity of causal powers is identity of causal consequences across nomologically possible contexts.

Metaphysical point: Causal powers supervene on local microstructure. In the psychological case, they supervene on local neural structure. [Fodor 1987, p.44]
Fodor's claim here is importantly stronger than what he (if not everyone) would call "methodological individualism". Methodological individualism (for Fodor) is the claim that psychological states are to be individuated only with respect to their causal powers -- roughly, the idea of Fodor's "methodological point" above. Now there may be some concerns about even the first clause of the methodological point (e.g. scientific taxonomy may sometimes serve in part to capture aspects of the etiology of structures -- as perhaps in functional and evolutionary explanation [see McClamrock, forthcoming]), but I will generally accept this suggestion for present purposes.
More important to note in the present context is that the methodological point explicitly does not assume supervenience on local physical structure. As Fodor puts it himself,
...there is nothing to stop principles of individuation from being simultaneously relational and individualistic. Individualism does not prohibit the relational individuation of mental states; it just says that no property of mental states, relational or otherwise, counts taxonomically unless it affects causal powers.... I've taken it that individualism is a completely general methodological principle in science; one which follows simply from the scientist's goal of causal explanation.... [But] taxonomic categories in science are often relational... Thus, being a planet is a relational property par excellence, but it's one that individualism permits to operate in astronomical taxonomy. For whether you are a planet affects your trajectory, and your trajectory determines what you can bump into; so whether you're a planet affects your causal powers, which is all that individualism asks for. [Fodor 1987, pp. 42-3]
Individualism alone would then allow for the possibility of a psychological taxonomy which was relational, as long as those relations are "causally affecting the object" -- i.e. as long as they play a causal role in the production of behavior. In what follows, I will suggest that the "metaphysical" point of local supervenience of the psychological on the physical is hardly assured -- and likely false -- for just this kind of reason: Relational properties of mental states play the kind of role in the production of behavior which supports their use in the taxonomy of belief-desire level psychological explanation. Further, and more importantly for my general point, I'll go on to show how the methodological point is misused by Fodor in defense of local supervenience.
But before moving on to these, a small digression about methodological solipsism is in order. Methodological solipsism is the claim that taxonomizing psychological states via their contents (as given through opaque construals of propositional attitude ascriptions) is "give or take a little" compatible with taxonomizing them with respect to their formal properties; that "mental states are distinct in content only if they can be identified with relations to formally distinct representations." [Fodor 1980, p.64] And although it's not entirely clear what being formal amounts to here (Fodor admits that this notion will "have to remain intuitive and metaphoric"), this much is clear: Formal properties of mental representations are local: they are internal to the organism, and not dependent on relations to the external environment.
But as Putnam [esp. 1975 and 1984] and Burge [esp. 1982] (among others) have pointed out, examples of the effect of external context on the content of propositional attitudes are common. Whether one believes that one has arthritis, or that pans are made out of silver, depends at least in part on the social use of the associated terms in one's language, even if the socially ideal conditions for application of the terms have not been entirely internalized. Whether you believe you have arthritis depends on what `arthritis' means in your society's language, and that (a la the linguistic division of labor) depends in part on what the experts say.
In acknowledging the point that there may well be some non-internal constraints on the content of opaque propositional attitude ascriptions -- at least those brought out by the standard indexical and natural kind cases -- Fodor qualifies his methodological solipsism. As he puts it, "barring caveats previously reviewed, it may be that mental states are distinct in content only if they are relations to formally distinct mental representations.... at least insofar as appeals to content figure in accounts of the mental causation of behavior." [Fodor 1980, pp. 67-8; my emphasis]
The intent of this qualified formality condition is to claim that there is some kind of content ("narrow content") which does respect the formality condition, and which will provide the taxonomy of our "more mature" intentional (computational) psychology. The burden of finding some such notion of narrow content is an onerous one, however. (See Fodor 1987, esp. ch. 3-4 for Fodor's own worries on this subject.) But importantly, the existence of narrow content depends critically on just the position which Fodor stakes out in chapter 2 of Psychosemantics, and the one which I will be questioning throughout what follows: the claim that whenever content differences do show up in differences of behavior, there must be a difference in formal (and thus also in internal physical) state -- that is, that psychological taxonomy ought to treat physically identical brain states as psychologically type identical. I'll now turn to that claim.
Why should we think that the causal powers of objects supervene on their local microstructure alone -- as Fodor glosses it, that "if you're interested in causal explanation, it would be mad to distinguish between [physically type-identical] brain states"? [Fodor 1987, p.34] Of course, it's clear that the bare assumption of materialism -- "weak supervenience" or "vapid materialism", as John Haugeland has dubbed it [Haugeland 1982] -- is not enough to guarantee this. Materialism claims that if you fix the physics of the universe, you fix its higher-level properties as well. But this is entirely compatible with the possibility that lots of the interesting and perhaps even scientifically respectable properties of objects and structures in the universe are relational (e.g. teleological, informational, etc.). So nothing about basic materialism alone guarantees Fodor's "metaphysical" point.
He seems at one point to acknowledge this; as he says, "you can't affect the causal powers of a person's mental states without affecting his physiology. That's not a conceptual claim or a metaphysical claim, of course. It's a contingent fact about how God made the world. God made the world such that the mechanisms by which environmental variables affect organic behavior run via their effects on the organism's nervous system." [Fodor 1987, pp. 39-40] This is, of course, a little puzzling. Why should Fodor call the claim that causal powers supervene on local microstructure his "metaphysical point" while at the same time claiming that it's a contingent and non-metaphysical fact that you can't affect the causal powers of mental states without affecting the underlying physiology?
In fact, Fodor himself notes that there is at least some sense in which physically type-identical brain states might well have different causal powers: Say "get water" to Oscar here on Earth, and he will bring H2O; make the same sounds to his twin on Twin Earth, and he will bring XYZ. For our purposes, we should note that the differing causal consequences of physically type-identical brain states in different contexts will go far beyond this. Since on the wide notion of content, the contents of mental states are affected by social facts -- e.g. what the experts believe -- type-identical brain states may often have different consequences in much more pervasive ways. Suppose Oscar and his twin both have mental states they would characterize by saying "I want to drive to Cleveland", but the route to the city called "Cleveland" is somewhat different on Twin Earth (although the difference has yet to show up in the Oscars' brains). Then their (initially) type-identical brain states will lead them to significantly different behaviors, as they progress through the use of maps, exit markers, and directions from service-station attendants. Of course, these differences in behavior are quite systematic. From here in Chicago, I can make a good prediction that Oscar will end up travelling east, even before he knows that, and in spite of the fact that his twin may well end up heading west (given the different respective locations of "Chicago" and "Cleveland" on Twin Earth). In fact, even their local behaviors might be identical -- they might both intentionally head south on I-94 and take the third exit off to the right (labeled "Cleveland", of course). But none of these local similarities impugns the prediction about their eventual directions of travel.
This kind of systematicity and predictability in behavior would seem to underlie a significant part of the usefulness of real intentional accounts of action. Although the emphasis from rationalist-minded philosophers like Fodor is typically on the inferential and logical structure of propositional attitudes, a large part of the real force of propositional attitude ascriptions comes in explaining and anticipating actions with respect to objects in the world. Ascribing the desire to go to Cleveland to someone is useful at least in part in virtue of its regular connection with a particular object in the world -- Cleveland.
There are then at least some prima facie reasons to see the systematic and predictive use of intentional explanation as depending in part on the subject's ability to take advantage of regularities in its world (including its social world) which it may not explicitly represent ahead of time. Or to put this in a slightly different way: The idealization of the organism which underlies an intentional characterization of its states may idealize not only about its current internal state, but also about what will happen on the basis of those internal states, given the organism's ecological situation. So in some pretty interesting and systematic ways, it might seem that the causal powers (at least for the purposes of psychology) of physically type-identical thoughts can differ -- thus violating the "metaphysical point".
Fodor of course resists this sort of suggestion, claiming that its problem lies in the fact that "identity of causal power must be assessed across contexts, not within contexts" [Fodor 1987, p.35] -- that is, by appeal to something like the second clause of the methodological point. To assess the causal powers of two things by evaluating them in different contexts is, he would have us think, cheating: "What is... not in general relevant to comparisons between the causal powers of our biceps is... for example, that in C (a context in which this chair is not nailed to the floor) you can lift it; and in C' (a context in which this chair is nailed to the floor) I cannot lift it. That eventuality would give you nothing to crow about." [Fodor 1987, p.35]
But it seems that Fodor has overlooked some basic facts about the general context-dependence of many higher-level explanatory types. To be an event or object of a particular (higher-level) type often requires occurring in the right context. The position of a given DNA sequence with respect to the rest of the genetic material is critical to its status as a gene; type-identical DNA sequences at different loci can play different hereditary roles -- be different genes, if you like. So for a particular DNA sequence to be, say, a brown-eye gene, it must be in an appropriate position on a particular chromosome. [For a general discussion of context-dependence in the biological context, see Wimsatt 1976.]
In such cases of context-dependence, we see the failure of a kind of parallel to the formality condition. Chemically type-identical DNA sequences can differ in genetic role; physiologically type-identical organisms can differ in fitness and ecological role; structurally identical valves in a complex machine can play entirely distinct functional roles (as with intake and exhaust valves in an internal combustion engine); psychologically identical agents can have different social roles (as with the president and someone who thinks he's the president); and type-identical machine-level operations in a computer can differ greatly with respect to their computational properties (as when, say, "store A at location X" is on one occasion storing the last digit of the sum just calculated, and on another is displaying the next letter on the screen).
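The last of these cases is easy to make concrete. Here is a minimal sketch (my own illustration, not anything from Fodor; the toy machine, addresses, and conventions are all invented) of how one and the same machine-level store operation can, depending on its context in the larger system, be the recording of a digit of a sum in one case and the displaying of a letter in another.

    # A toy machine: one flat memory array, and a single primitive
    # operation, store(addr, value).  All names, sizes, and address
    # conventions here are invented purely for illustration.
    MEMORY_SIZE = 256
    DISPLAY_START = 128          # pretend addresses 128+ are a memory-mapped display
    memory = [0] * MEMORY_SIZE

    def store(addr, value):
        """The machine-level operation: put value at location addr."""
        memory[addr] = value

    # Context 1: the store records the last digit of a sum just calculated.
    total = 47 + 38
    store(5, total % 10)         # location 5 now holds the sum's last digit

    # Context 2: a type-identical store "displays the next letter", only
    # because location 130 lies in the region the display hardware reads.
    store(130, ord('A'))

    # Locally, both events are the same kind of operation: write a number
    # to a cell.  What makes one the storing of a digit and the other the
    # displaying of a letter is the surrounding context -- the program and
    # the conventions of the machine -- not anything in the operation itself.
    print(memory[5], chr(memory[130]))   # -> 5 A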
The context-dependence of many higher-level explanatory properties of parts of systems is thus a common phenomenon, and one that we should by now perhaps even expect to find in explanations given by the higher-level sciences. But if so, then why should we think that explanatory properties used in our intentional psychology -- e.g. the contents of the thoughts, goals, and knowledge of the subject -- supervene on local structure if these other respectable explanatory properties don't? It can't be that there is some general principle about local supervenience which is "constitutive of scientific practice".
So to return to Fodor's point about assessing causal powers across contexts rather than within them: The critical point to note is that sameness of causal powers is relative to a way of taxonomizing a system. [For a clear and compelling (as well as more detailed) discussion of a similar kind of moral in the case of psycholinguistic inquiry, see Abrahamsen 1987.] For DNA sequences to have the same causal power as objects of chemistry is just for them to have the same (local) chemical structure; but to have the same causal power as genes is roughly to make the same contribution to the phenotype. But what contribution is made to the phenotype is highly dependent on context -- on where the sequence is in relation to the rest of the genetic materials, and on the precise nature of the coding mechanisms which act on the sequences. So the causal power of a particular gene considered as that kind of DNA sequence is supervenient on just its local microstructure, but its causal power considered as that kind of gene is not. (This is probably a slight oversimplification; there may be no unique genetic level of explanation. But for current purposes, this doesn't matter. Pick one such level above the chemical, and the point will stand.)
Now this might not be entirely incompatible with the second clause of Fodor's methodological point. Identity of causal powers might still be thought of as identity of causal consequences across nomologically possible contexts. But what counts as a "nomologically possible context" must be tailored to the level of description we're concerned with. To put the same DNA molecule in a different but nomologically possible context, we need to preserve its local chemical properties. But to put the same gene in a different but nomologically possible context, we need to preserve its properties as a gene; and this places significant constraints on the contexts we can consider -- e.g. a context in which it now codes for entirely different phenotypical properties (even with the same local microstructure) won't count as an instance of the same gene.
Similarly, describing our brain states as brain states is describing them in terms of properties which presuppose only the local regularities of physical functioning; the causal relations involved do not presuppose facts about external context or external regularities. But to describe those very same states as intentional states may perfectly well be to describe them in terms of properties which do presuppose a context and external regularities, and thus may presuppose differences which do not supervene on the local physical structure. And the considerations offered in the preceding section suggest that this possibility is in fact prima facie plausible for intentional states.
To summarize: Higher-level explanatory functional properties across the sciences often exhibit context-dependence, so that the science in question may taxonomize objects with type-identical local physical microstructure differently. In such cases, the object's causal powers as that kind of higher-level entity do not supervene on its local microstructure alone. The prima facie context-dependence of intentional properties might well be taken to indicate that psychological taxonomy works like this. In any case, we've been given no "constitutive principle of scientific practice" which would conflict with this.
The prima facie reasons (given in section III) for thinking that physically type-identical intentional states might be seen as differing in causal powers motivate Fodor to offer further reasons to doubt the anti-solipsistic position. But the analysis I've provided in section IV above provides clear rebuttals to two of Fodor's notable concerns here -- the "grotesqueness" and the "madness" objections.
Grotesqueness: As he says, "If this argument [roughly, that of section III] shows that my mental state differs from my Twin's, it's hard to see why it doesn't show that our brain states differ too.... it would be grotesque to suppose that brain states that live on Twin-Earth are ipso facto typologically distinct from brain states that live around here." [Fodor 1987, pp. 37-8] But it's now not hard at all to see how the mental states could differ with context but the brain states not. The critical asymmetry between mental states and brain states may be essentially the same kind of level-relative context-dependence as in the case of DNA sequences and genes. An object's causal powers and what they might supervene on depend on the kind of thing we're taking it to be -- on the level of organization and explanation we're considering. Viewing an object as a brown-eye gene presupposes facts about its genetic role (which depend on more than local microstructure), while viewing it as DNA presupposes only facts about its chemical properties (which do supervene on local microstructure). Similarly, the powers and properties of the brain when viewed chemically (even over some extended period of time) presuppose no external relational properties, but its powers and properties when viewed as an intentional system may well do so.
Madness: Fodor attributes to Ned Block the following puzzle which is taken to suggest the incoherence of taking an ecological property like linguistic affiliation as a determinant of individual differences -- in this case, food preferences. Consider some psychologist interested in the etiology of food preferences:
Now, on the intuitions that Burge invites us to share, Oscar and Oscar2 have different food preferences; what Oscar prefers to gruel is brisket, and what Oscar2 prefers to gruel is brisket2.... if she counts their food preferences the way Burge wants her to, then she has to say that there are three sources of variance: genetic endowment, early training, and linguistic affiliation. But surely it's mad to say that linguistic affiliation is per se a determinant of food preference; how could it be? [Fodor 1987, p.40]
Here's how: Oscar might, for example, have lots of linguistically mediated beliefs about "brisket" (e.g. he thinks "Brisket is much healthier than other meats", "Brisket is more impressive to serve than other meats", "Brisket goes with the rest of this meal very well", etc.) which can play a critical role in his own food preferences. Roughly, if Oscar depends on things outside his head to determine what counts as "brisket" (as, via the linguistic division of labor, we all tend to do), and if some of what determines his preferences vis-a-vis "brisket" are beliefs about this general class (e.g. whether it's healthy, impressive, aesthetically compatible with something else, etc.), then linguistic affiliation may well be a part of what determines his preference -- it may, for example, help us predict what he's going to buy at the deli. The social use of the term "brisket" may be an important determinant of Oscar's behavior vis-a-vis various deli meats, just as the maps and exit markers were an important determinant of his systematically predictable driving-east behavior when he wanted to go to Cleveland.
This is not to say there is no sense to a solipsistic notion of, say, food preference. There are no doubt interesting and systematic facts about food preferences under circumstances where the subjects' information about the food in question is dramatically constrained in one way or another, and in particular where they are not allowed to take advantage of information which would be socially available but which they have not internally represented. Such facts about "preferences" are no doubt a legitimate part of psychology. But the relation between these facts and the notion of preference critical to normal intentional-level theorizing about human agents in the social context is far from straightforward. The normal ascription of desires and preferences to agents outside the laboratory serves centrally to predict and explain their actions under normal informational conditions -- not under artificially impoverished ones (as in the food-tasting case, or perhaps the cases of tachistoscopic presentation of visual displays), nor for that matter under ideal conditions of "perfect information" (as with perfectly rational and well-informed voters or market agents). It is an empirical matter whether or not real embedded preferences approximate either those found when information is impoverished so as to include only bare sensation, or those found when information is idealized as perfect or complete in some way.
This empirical nature of the question is all that I need to answer Block and Fodor's objection here: It's hardly impossible that linguistic affiliation (i.e. social context) should turn out to be a component of actual food preference. But furthermore, it seems to me that this is an empirical matter about which we're in a pretty good position to guess. Market agents and political agents typically work under conditions of dramatic informational (and cognitive resource) constraint; and their use of gross satisficing strategies (like using price as an indicator of value) leaves their behavior poorly modeled by an account of them as perfectly rational optimizers. Similarly, the influence of social factors on preferences may well leave them in practice a poor fit to the laboratory case of impoverished information. (If you've ever been in a blind beer-tasting test, you know how you're as likely to choose Schlitz as Lowenbrau.)
It's possible that these considerations may leave some of the constrained-information preference data in much the same position as some kinds of mistaken stimulus-response generalizations. "Preferences" under conditions of dramatically constrained contextual information may be little more useful and respectable than behavioristic "responses" to linguistic items isolated from any kind of context. It is at least entirely possible that these kinds of "solipsistic" facts about "preferences" are another case -- like that of behaviorism, or that of over-idealization in classical economic theory -- where a methodology generated by mistaken epistemic or metaphysical views has been allowed to ride roughshod over critical facts about a domain.
We can now assess where Fodor's account has gone wrong. He is, in my view, entirely correct in his insistence that "for one thing to affect the causal powers of another, there must be a mediating law or mechanism." [Fodor 1987, p.41] Where he goes wrong is in his insistence that "It's a mystery what this could be in the Twin (or Oscar) cases; not surprisingly, since it's surely plausible that the only mechanisms that can mediate environmental effects on the causal powers of mental states are neurological". [Fodor 1987, p.41] Social and environmental mechanisms of all sorts may well mediate such differences in causal powers, just as the difference in the causal powers of type-identical DNA molecules at different loci is mediated by the external (to the genetic material) mechanisms of genetic coding, or as the difference in causal powers of type-identical organisms can be mediated by the different social mechanisms they may confront.
It's a threatening cloud that Fodor points to; as he says at one point, "As I keep pointing out, if mind/brain supervenience goes, then the intelligibility of mental causation goes with it," since "if supervenience be damned for individuation, it can't be saved for causation." [Fodor 1987, p.42] Now if intentional properties don't supervene on non-relational properties of the brain, and supervenience is thus "damned for individuation", then perhaps it shouldn't be saved for causation either -- that is, then the causal powers of psychological states don't depend only on the underlying brain states either. But this is a long way from losing "the intelligibility of mental causation". The intelligibility of mental causation requires intelligible mechanisms, but some of those mechanisms might well be partly out in the physical and social world rather than entirely within our individual heads. Maybe "God made the world such that the mechanisms by which environmental variables affect organic behavior run via their effects on the organism's nervous system," [Fodor 1987, p.40] but not exclusively so. And furthermore, we have made the social world so that there would be other mechanisms through which environmental variables might systematically affect human behavior.
A final note of emphasis: I haven't argued here that there cannot be an account of intentional behavior in terms of narrow content. Of course, I think that there will be -- at least in principle -- accounts of human behavior which are solipsistic in nature: microphysical, chemical, and neurophysiological ones, at least. But if you have any kind of sympathy for the now-standard suggestion that there is some fairly strong kind of autonomy of levels of explanation and organization of complex systems, these are a long way from an account of us as (fairly) systematically (relatively) intelligent and (sometimes) rational beings. And the current point is that there may well also be scientifically respectable accounts which individuate the causally relevant states of the organism non-solipsistically. The prima facie inclinations to see intentional taxonomy as non-solipsistic could turn out to be overridden in the end; but they should not be overridden in the beginning by any putative "constitutive principle of scientific practice" of which I'm aware.
I've done the local job that I set out to do: I've responded to Fodor's mistaken suggestion that the "metaphysical point" of his version of "methodological individualism" is supported by any "constitutive principle of scientific practice", and his corollary suggestion that all this should incline us toward the acceptance of methodological solipsism about intentional states. But before I stop, let me make a couple of very brief suggestions about the bearing of the current discussion on a couple of related issues: First, its relationship to some of the methodology of current work in artificial intelligence; and second, the relationship of all this to the more general question of individualism and holism in the sciences.
In at least some contrast with cognitive psychology proper, research in AI is much more concerned with the high-level process of figuring out how it is that we solve problems and get around in the world we face at the intentional level -- more interested in the information-processing problems we face and how we solve them than in the details of, say, how we store and retrieve particular syntactically characterized bits of data [see Pylyshyn 1981]. As such, it's not implausible to see the methodology of AI as perhaps even the scientific methodology which should be of most central concern to empirically-minded philosophers who are interested in the intentional-level characterization of human beings. It is here that we see the focus on intentional-level topics such as goals, strategies, plans, heuristics, and the representation of knowledge.
A recent wave in work on planning [see e.g. Suchman 1987, Agre 1988] emphasizes the extent to which planning should be taken as situated activity: Plans are not complete procedures worked out ahead of time, but flexible and indeterminate strategies which are constantly filled in and revised in their context of execution. This allows for the possible avoidance of computationally expensive (and perhaps even impossible) projection, as well as for the exploitation of regularities in the environment which can be discovered along the way rather than explicitly computed ahead of time. After all, an ant makes its complex path across the sand not by mapping the complexity ahead of time, but by using simple and local strategies to deal with the discovered complexity of the terrain [Simon 1981, pp. 63-4] -- much in the way Oscar uses local strategies involving freeway markers to get to Cleveland.
The exploitation of real-world constraints has long been a central guide in artificial intelligence research. But the "situated activity" view of planning takes this a step further, not only by explicitly taking advantage of assumed regularities, but also by placing at the heart of planning the flexibility to take advantage of regularities which are not assumed ahead of time and universal, but discovered and thoroughly contingent. A consensus may be arising at this point that planning ahead for details is not how we get around the world (or how clever machines will do so), as such a strategy would make dealing with the complexity of real environments a practically intractable problem. If this view is right, then we are being given a picture of planning as an intrinsically situated activity -- one explicitly specified in relationship to and dependent on the environmental context it confronts. Such a view of the methodology of this part of the informational sciences of the mind would be a natural fit with the view that intentional states as such are essentially embedded states that cannot and should not be fruitfully examined in isolation from their embedding context.
Finally, let me conclude with a brief comment on the larger question of methodological individualism. It's my hope that the kind of analysis of the issues surrounding this topic given here might help to clarify the general question of individualism and holism in some interesting ways. Elliott Sober [Sober 1984] has recently made a valiant attempt to clarify how the general individualism vs. holism question might be transformed from a verbal dispute into a real, at least partially empirical question with a solution that avoids the triviality of simplistic reductionism ("only the little individuals matter") or the vapidity of complete ecumenicalism ("any level -- individualistic or holistic -- is as good as another").
The analysis I'm offering here is, I think, very much in the same spirit. First, it may help fill out the idea of how individualism could be false in given cases: The properties at the level that you're interested in might be fundamentally context-dependent properties. And second, it may help bring out what's wrong with the reductionist criticism of anti-individualism (typically in social sciences) which suggests that the view must imply some kind of non-mechanical or occult view of causation -- as Sober puts it, "that you must add some sort of occult social fluid to individuals and their interactions to get social groups." [Sober 1984, p.185] The denial of individualism doesn't imply any kind of occult action at a distance or any rejection of the generality of physics; it implies nothing more cosmic about causation than some facts about context-dependence and autonomy of level which are already a central part of entirely respectable sciences like genetics and computer science.
Perhaps there is a pretty reasonable rule of thumb which those of us interested in the philosophy of the less-developed sciences might do well to heed more closely, even in cases where it might seem to rub against our first-glance metaphysical inclinations: If it shows up in established sciences, don't be so worried when it shows up in slightly less mature ones. The possibility of anti-individualism presented by level-relative context-dependence -- including that which we might find in intentional causation and explanation -- is a natural part of our successful scientific practice.
Thanks to Bill Wimsatt, Josef Stern, Richard Rosenblatt, Lance Rips, Dan Gilman, Stuart Glennan, Rob Chametzky, Phil Agre, Marshall Abrams, and the editor and referees of Philosophical Psychology for their contributions to the causal history of this paper.