Nearly a year ago, I posted a paper (below), to which R. Scott Bakker responded (further below). We then proceeded to go back and forth a little. Recently, I realised I owed Bakker a response, which I thought I’d prime by first reposting the context. So, here is the paper, and below that is the discussion. My next post will clarify my (new) positions and offer a response. (Of course, since some time has passed, my views aren’t exactly the same. Still, they are similar enough to warrant a reply.)
Are There Still Philosophical Problems?
Abstract: This paper combines the congruent work of modern biology and Wittgenstein with eliminativism about intentional content and phenomenal properties. It concludes that linguistic content is an illusion, and that this illusion is the cause of apparent intentionality in thought; that there are no intentional states and no phenomenal properties; that our current understanding of how language evolves supports the use of justification conditions over truth conditions; and that this entails the dissolution, or reformation, of many of the classical problems of philosophy.
Keywords: Content, Intentionality, Naturalism, Phenomenal, Justification Conditions.
Philosophical problems do not deal with the nature of reality; rather, they are problems of language, born from language and conferred to thought. The same goes for intentionality: the mind’s ability to be intrinsically about something. “Paris is in France” is not intrinsically about France. Quine’s inscrutability of reference shows that semantic tokens cannot be intrinsically about anything. And if tokens aren’t, neither is thought, as intentionality of thought is “a matter of ascription, attribution and interpretation by others” (Rosenberg, 2015). Here modern biology and the pragmatists agree. But therein lies the problem. When I say “France” you know what I mean. You can picture certain parts, think about them. Or perhaps it just seems that way. The proposition ‘Paris is in France’ is as much about Paris as the mathematical proposition ‘2 + 2 = 4’ is about four objects, and no less arbitrarily so than ‘2 + 2 = 5’. On the face of it this seems unlikely; the former alone accords with our intuition. That intuition is partly due to the necessity for cultural justification and the conceptual problems inherent in describing via a language pregnant with dualistic artefacts. We are misled by language into the assumption that ‘2 + 2 = 4’ accords with reality and ‘2 + 2 = 5’ does not.
In the former a person counts a set of objects, and in the latter, they are included in the set. Both are equally valid rules for interpreting the environment and are only two of indefinitely many. One cannot say that it is how a proposition ‘fits’ reality which makes it intentional, as this mistakes conceptual application for perception. The mathematical proposition only applies to the world in the way community practice dictates. Take, for example, the litmus test for intentionality: the failure to substitute co-referring terms, or opacity. It is possible that this is a differential in the application of a rule rather than a mental state. Consider the following:
Daniel looks through the bars of a window. He notices a house and a woman in rags. Unbeknownst to Daniel the woman is the owner. He believes:
P1. That house is ‘II’ high and ‘III’ wide.
P2. Her house is not.
P1. TH ⊃ II ∧ III
P2. HH ⊃ ¬II ∧ ¬III
These terms share the same referent but have different modal values. Daniel’s propositional attitudes are opaque; one cannot substitute the co-referring terms. Daniel takes two steps back, looks again, and sees that the house is no longer ‘II’ high nor ‘III’ wide. The modal values realign:
P1. TH ⊃ ¬II ∧ ¬III
P2. HH ⊃ ¬II ∧ ¬III
(so that P1 ≡ P2)
and opacity becomes translucent. What happens to the brain when intentionality dries up?
The phenomenal aspect of aboutness is intuitively appealing because of its method of propagation contra language: for thousands of years, language developed alongside the contention that mind and body were separate. But if ‘belief’ is not phenomenally dependent (Wittgenstein, 1953), what does it mean to say: “Daniel believes ‘x’”?
I can imagine him saying “I believe ‘x’” and that not being the case. I can also imagine him thinking “I believe ‘x’” and that not being the case. So, when is it right to say he believes ‘x’? If not by virtue of what he says, thinks or feels, what distinguishes Daniel’s state of believing, as opposed to knowing, must be a characteristic of his behaviour. I know Daniel believes ‘x’ when he acts in such and such a way. His propositional attitude is an “indirect measurement” of “reality diffused in the behavioural dispositions of the brain and body” (Dennett, 1991). Our inclination to conceive this otherwise is not surprising, but indicative of our susceptibility to the metaphors of dead philosophy, e.g. ‘seeing’ in relation to memory.
Here it is important to stress the effects of social agreement in the development of the illusion of content:
“According to [pragmatists], we can only make sense of contentful thinking in the context of shared ways of life in which social norm compliance is developed, maintained and stabilized through practices. Such practices are not only based on our shared biology but in social engagements and cultural devices that evolved over time, especially linguistic tokens, the primary bearers of semantic content.” – Hutto and Satne.
This raises the question: how do linguistic tokens evolve absent content?
Cloud’s model of Skyrmsian semantic growth can be paraphrased as follows: within a society (system) of insects, where the individuals have it in their best interests to cooperate completely, it seems likely that random attempts to convey something, coupled with random attempts to respond (given that suitable or unsuitable responses are positively or negatively reinforced respectively via Darwinian filtration), would eventually give rise to a complex symbolic language, rather like the languages some insects do have. And this does not require any intrinsic aboutness; the common theme is societal dependence. What appears as content comes up through normative compliance as regulated by social and cultural practices. Naturalism can help itself to a great deal of this type of ‘content’ without pinning it down.
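The dynamic paraphrased above can be made concrete. What follows is a minimal sketch (not Cloud’s or Skyrms’s actual models; the variable names and payoff scheme are my own illustrative choices) of a two-state Lewis signaling game with simple reinforcement: sender and receiver begin with no conventions, successful rounds reinforce whatever happened to be tried, and a stable coordination of signal and act emerges without any intrinsic aboutness being built in.

```python
import random

# Toy Lewis-Skyrms signaling game with urn-style reinforcement.
# Two world states, two signals, two acts. Initially every choice
# is equiprobable; success reinforces the choices that produced it.
STATES = SIGNALS = ACTS = [0, 1]

# Urn weights: sender[state][signal], receiver[signal][act]
sender = {s: {sig: 1.0 for sig in SIGNALS} for s in STATES}
receiver = {sig: {a: 1.0 for a in ACTS} for sig in SIGNALS}

def draw(weights):
    """Pick a key with probability proportional to its weight."""
    r = random.uniform(0, sum(weights.values()))
    for key, w in weights.items():
        r -= w
        if r <= 0:
            return key
    return key  # guard against floating-point leftovers

def play_round():
    state = random.choice(STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:  # cooperation pays off: reinforce both choices
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0
    return act == state

random.seed(0)
for _ in range(20000):
    play_round()

# After many rounds the players almost always coordinate, even
# though the signals never intrinsically "meant" anything.
successes = sum(play_round() for _ in range(1000))
print(successes / 1000)  # typically close to 1.0
```

The point of the sketch is that the signal’s apparent ‘meaning’ is nothing over and above the history of reinforced use, which is the societal dependence the paragraph above describes.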
Content is as physical as the rules of chess, a cultural creation formed by convention and a Darwinian necessity for cooperation.
“[On beliefs] In contrast with other intelligent dealings with the environment, these content-involving practices contain a special sense of going wrong: this is not just what is acceptable for a community but being correct or incorrect according to how things are anyway. These practices differ essentially from ways of dealing with the world that do not represent it.” – Hutto and Satne.
Here the ‘special sense’ of going wrong amounts to a conceptual mistake. Whatever justifies the use of one rule over another must be non-representational (Millikan, 2009), and so cannot appeal to the extension of semantic properties contra reality. The belief that ‘2 + 2 = 4’ is in a special sense right and ‘2 + 2 = 5’ is wrong is due to the fact that we conceive ‘2 + 2 = 4’ as fitting reality in a distinct manner. The problem is that we cannot justify this assertion using truth conditions. Wittgenstein’s justification conditions, however, which rest on similar grounds to Hume’s problem of induction, can cure the itch. There is no fact which justifies my use of ‘plus’ rather than ‘quus’ except social justification from community values and practices. The justification for my using ‘plus’ rather than ‘quus’ is in virtue of an arbitrary convention. I am taught the rule ‘plus’ and so cannot easily imagine another in its place. Context and conceptual application cross with a linguistic disposition towards dualism to insist on a preferred interpretation; this is the point of the wood-seller in ‘Lectures on the Foundations of Mathematics’.
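Kripke’s ‘quus’ can be stated concretely (the threshold of 57 follows Kripke’s own example; the function names are just illustrative). The point is that any finite history of computations is consistent with both rules, so no fact about past use alone settles which rule a speaker was following:

```python
def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke's deviant rule: agrees with addition whenever both
    # arguments are below 57, answers 5 otherwise.
    if x < 57 and y < 57:
        return x + y
    return 5

# Every computation a speaker has actually performed (say, with
# small arguments) fails to distinguish the two rules:
history = [(a, b) for a in range(10) for b in range(10)]
print(all(plus(a, b) == quus(a, b) for a, b in history))  # True

# The rules diverge only on cases never yet considered:
print(plus(68, 57), quus(68, 57))  # 125 5
```

Nothing in the agreed-upon history picks out `plus` over `quus`; only community practice does.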
The inclusion of a grammatical subject when referring to phenomenal properties is one example of how language channels our inner Cartesian. Its application in metaphysics is another. If a person asks the question “How does a map fit the world?”, the answer is obvious: “It pictures it.” But when I look at the landscape, that’s not what I see. The rule transforms the picture. If a rule is taught well, one does not see its application; it simply applies. Here is the point: if, for a major portion of language development, social norms required the linguistic separation of mind and body, it is reasonable to assume that our language, and so our thoughts, contain predispositions and intuitive susceptibilities toward one type of description (a picture).
When David Chalmers argues for the conceivability of an exact physical duplicate without consciousness, what he is describing is dualism. The existence of a mind independent of its physical hardware is a logical factor of his first proposition, which amounts to conflating the conceivability of a computer working without electricity with its a priori probability. Chalmers’ thought experiment makes more intuitive sense, but that is an observation about language and the history of thought rather than about reality in any interesting sense.
I can conceive such a thing, but that means nothing.
“One is tempted to say, ‘A contradiction not only doesn’t work, it can’t work.’ One wants to say, ‘Can’t you see? I can’t sit and not sit at the same time,’ as one says, ‘I can’t talk and eat at the same time.’ The temptation is to think that if a man is told to sit and not to sit, he is asked to do something which he obviously can’t do.” – Wittgenstein
Rosenberg, A., 2015, “The Genealogy of Content.”
Hutto, D. and Satne, G., 2014, “The Natural Origins of Content.”
Cloud, D., 2014, “The Domestication of Language.”
Millikan, R., 2009, “Biosemantics.”
Dennett, D. 1991, “Real Patterns.”
Kripke, S., 1982, “Wittgenstein on Rules and Private Language.”
Quine, W., 1960, “Word and Object.”
Wittgenstein, L., 1953, “Philosophical Investigations.”
Wittgenstein, L., 1956, “Remarks on the Foundations of Mathematics.”
In response to this paper, R. Scott Bakker writes…
Begging the question, ‘What are norms?’ I’m not sure what’s gained by substituting one set of supernatural entities for another. Wittgenstein, I fear, simply shifts the conundrum from intentionality to normativity. Either way we find ourselves lacking any real empirical recourse.
The fact is, meaning does an enormous amount of actual work–it is itself a natural phenomenon. It stands to reason that genuine theoretical cognition (as opposed to philosophical speculation) regarding its nature will require science, the same as any other natural phenomena.”
Firstly, forgive the late response; my phone was broken and I’ve been working more often than usual. I’m baffled by your use of ‘supernatural’ regarding normativity. Perhaps it is worth spelling out precisely what you mean, so I can respond properly? Shifting from intentionality to normativity cuts out such usages (“supernatural”) in any serious sense. That is the point. The metaphor you use cannot describe anything: not the propositions, nor the reality they apparently fit via conceptual application.
“Meaning does an enormous amount of actual work” is a natural assumption, though it could be otherwise. The Humean inversion springs to mind, and also this anecdote:
Wittgenstein: “Why do people always say, it was natural for man to assume that the sun went round the earth rather than that the earth was rotating?”
“Well, obviously because it just looks as though the Sun is going round the Earth.”
“Well, what would it have looked like if it had looked as though the Earth was rotating?”
You say, “[Meaning] is itself a natural phenomenon. It stands to reason that genuine theoretical cognition (as opposed to philosophical speculation) regarding its nature will require science, the same as any other natural phenomena.”
But for this to be coherent, you first need to be able to cash out content scientifically (or at least be on the right track). And as far as science is concerned, there isn’t any. Well, not that I’m aware of (biologists, please raise your hands if I’m wrong). Trading intentionality for normativity cuts out object correspondence, making logic relative (and why wouldn’t it be?). It clears the rubble of the ancient artefacts of language, which congest our thought. And it runs parallel to Darwinian theory, as opposed to Descartes, and so can be included with modern theories such as teleosemantics. And with those theories it shows not what we all think we should find if only we look hard enough, but that we are looking in the wrong place.
I look forward to your response, and to you defining some of your terms, so we can really get in depth next time. But thank you for your response; it was (and is) certainly worth thinking about.
I think it likely that shifting our theoretical discourse from an intentional idiom to a more narrowly normative one will leave the field suffering the same underdetermination as always, which is to say a science forever stranded with full-time philosophers. I think I can most perspicuously illustrate what I mean by explaining why.
No one has been able to naturalize normativity for much the same reason they haven’t been able to naturalize intentionality (or even phenomenality, for that matter). Cognitive neuroscience, let alone cognitive science in general, remains a theoretical mire, so for me, the operative assumption behind all *naturalistic* attempts to solve the interminable puzzles is finding some way of fighting clear of interminable philosophy.
Here’s a possible way: The fact is, all intentional cognition is heuristic cognition, ways to make sense of our environments absent mechanical information. If brains are indeed some kind of difference minimizing or Bayesian engines (and they almost certainly are), then all we have access to are cues, information systematically related to what’s going on. We are everywhere confronted with the inverse problem, the dilemma of somehow tracking systems, physical systems, via the sensory *effects* of those systems. We are, quite famously, stranded with correlations, many of them learned, many of them evolved.
Normative cognition is heuristic cognition, a way to cognize environments via differential cues, deriving powerful efficiencies *at the cost of applicability.* As far as I can tell, this is the case. So the question the Wittgensteinian or any other normativist (or traditional philosopher of any stripe) has to answer is: Why think normative idioms possess theoretical applicability? Why think normative cognition, which leverages its power by neglecting what’s really going on and fastening onto cues, possesses the capacity to theoretically solve normative cognition (tell us what’s really going on)?
The observation that a normativist turn would simply strand naturalism with new versions of all the old conundrums is almost a platitude as it is. The heuristic account allows us to turn our back on the whole mess, see it for yet another ‘crash space,’ a register where we blindly misapply our tools simply because we have no metacognitive inkling of their heuristic limitations, let alone their actual structure, dynamics, and provenance. It can explain why normativism seems to solve problems, yet consistently remains underdetermined, and so solves nothing in the end.
The question really is quite a devilish one. Why should we expect a system built to ignore what’s going on to tell us what’s going on?
I’ll work backwards, starting with a few minor quibbles. The system was built to help us survive, not to let us know what’s going on in any precise sense. The problem with heuristic cognition is that it works well short term, but not as well long term (nature famously enjoys cheap, dirty and quick solutions).
Further, ‘system’ itself seems a loaded metaphor to use; like ‘machine’ or ‘mechanism’, it cuts out my argument by enforcing the current conception of what we mean when we say ‘understanding’, ‘grasping’, ‘process’ and so on. But I think the main problem with your response is that it is not one mind normativity speaks about at all; justification conditions require a majority of minds to hit target applicability. (Think of a crowd which sings in tune, though each soloist does not.)
This leaves me open to the reductio: “if the majority doesn’t agree, you’re ipso facto wrong!” Of course, but our views are thrust upon us most of the time; genetic, societal, or otherwise. Probably because what the majority thinks has a higher probability of fitting reality in a way which is useful, both for survival and for the procurement of information (in that order of importance).
It’s a hard question isn’t it! Every normativist I’ve asked has punted, so far at least. But it really is straightforward, and as the research continues to isolate and decompose the systems involved in intentional cognition, the more difficult it becomes to dodge.
You try to side-step the question here by assuming that heuristic cognition has no purchase on the collective nature of ‘language games’ (or the ‘game of giving and asking for reasons’ if you prefer a Sellarsian normative metaphysics). But this is simply not true. In fact, it’s the heuristic nature of intentional cognition that’s responsible for its reliance on an enormous field of background features. This is simply a natural feature of heuristic cognition, one requiring no (perpetually underdetermined) normative metaphysics to fathom.
Give it another crack: Why should anyone think normative cognition, which (as a matter of empirical fact) solves by ignoring what’s going on, possesses the capacity to tell us what’s going on with cognition more generally?
“If everybody always thinks that this sort of thing is money, and they use it as money and treat it as money, then it’s money… [But] If “money” implies “regarded as, used as, or believed to be money,” then philosophers will get worried.” – Searle
What makes the justification for the proposition “this is money” different from the truth of the mathematical proposition ‘9 x 9 = 81’?
We see from the work of Halbach that object correspondence is required for the logical structure of a theory of truth in the latter sense. This type of intentionality cannot be cashed out with hard science, as it first requires an answer to this question: “How can a physical thing be about, directed at, or point towards another physical thing?” And no amount of gerrymandering fits that oversized foot into the boot of epistemology.
Nothing but our intuitions regarding phenomenal properties, the applicability of societal creations (mathematics and physics, in particular) and our historical preference for dualistic metaphors makes the intentional stance appealing. Yet applicability can be handled without object correspondence, through conceptual application, teleosemantics and normative cognition contra language. And Wittgenstein and Quine deal with the reliability (or lack thereof) of the phenomenal.
Here is where I bite the bullet. Initially, I was asked why I chose normativity over intentionality, when both are “supernatural”. After a discussion of the differences between the former (which requires a majority of actors generating, through normative cognition and language, a mean aboutness geared by Darwinian selection towards survival rather than clarification) and the latter (a scientifically unsupported, dated, though intuitively appealing, single-agent aboutness), the question evolved, rather interestingly I must admit, into the seemingly essentialist challenge: “Why should anyone think normative cognition, an approximation, can give a non-approximate answer about cognition?”
Perhaps, whilst one approximation may be wholly unreliable at worst, and incapable of fine differentiation at best, the Bayesian system (as you call it) adapts. One system adapting is no big deal; billions of such systems, however, constantly correlating and amassing exponentially? Perhaps that is how such a system (network) of adaptive approximations might give a precise, though non-essentialist, non-intentional, conceptually applied answer to your question!
If, as you say, “applicability can be handled without object correspondence, through conceptual application, teleosemantics and normative cognition contra language,” then please show me what these interpretations have ‘solved’ aside from problems only some fraction of philosophers worry about to the satisfaction of some even smaller fraction?
As for the networked billions, things like social physics are already engineering some pretty impressive feats. The age of Big Data is just beginning, and the well of actionable surprises promises to be as endless as we are diverse. Our freedoms are about to be hoovered, such is the power they promise. Normativism, meanwhile, organizes conferences where everyone gets together to laugh, grouse, and bicker.
Dude, we are already being sliced and diced and dissected. The evidence just keeps stacking up. Whatever’s going on is a whole lot weirder than the 20th century could imagine. We gotta climb out of leaky boats like these if we want to move forward.
All of which means the question still stands. A description of a system doesn’t make that system evidence of your description. ‘Language games,’ ‘intentional stances,’ ‘norms’–these things are hopeless ways to theoretically cognize intentional idioms and intentional cognition. The proof lies in their inability to do anything more than warehouse more claims that cannot be sorted. The perpetual underdetermination of normative theory, meanwhile, is exactly what a negative answer to my question predicts. Normativism itself evidences the futility of normativism, the fact that adducing some normative metaphysics, some list of theoretical entities ‘not found in the book of nature,’ as Brandom puts it, just explores a different corner of metacognitive crash space. Swapping one set of ‘inexplicably efficacious irreducibilities’ for another simply gets us more philosophy. And when it comes to cognitive science, the dependence upon philosophers is precisely the problem, the mark of confusion, an absence of reliability and rigour.
So I just don’t see how your normativism counts as naturalism, asserting, as it does, the existence of superordinate, supra-natural efficacies, ‘pragmatic functions’–and on speculative grounds alone, no less. There’s just no knowledge here, only structured controversy, so I can’t see how it bears positively on the scientific project, which is to say, counts as naturalism. It’s naturalism+, or as I think it will eventually come to be seen, residual supernaturalism.
Sounds harsh, I know. But we’re running out of time and we need to figure out what’s really going on, and only science has been able to do that. Putting a hat on it gets us nowhere.
They ‘solve’ (dissolve) the mind-body problem which, as far as dualism goes, is so heavily entrenched in everyday thinking (not just “some fraction of philosophers”) that people commit themselves to metaphysics they are wholly unaware of, making them susceptible to all sorts of nonsense. And what I mean by that is, when a person says “I’m thinking of music”… “I saw in my dream”… “mental health is as important as physical health”… they are “filling in” areas they think must contain something. The point of Wittgenstein’s Beetle is to illustrate precisely this.
In regard to social physics: promised power ought always to be taken with a pinch of salt.
I’m not simply in Wittgenstein’s leaky boat. He, for one, thought there had to be content. He also had no access to modern science. I’m building a makeshift boat out of the parts which still seem sturdy.
I’m not trying to theoretically cognise intentional cognition! I’m saying it’s an illusion. There is no evidence (if you have any, now is the time to say) that intentionality exists. No intentionality means no content. What I’m saying fits in with this. If there’s no content, the applicability of mathematics etc. becomes a serious problem for us naturalists. But with borrowings from pragmatism, Wittgenstein, Quine and Millikan, we can explain this as conceptual application: making up for the lack of truth conditions (which stems from the lack of content) with justification conditions, a portion of verificationism, and normative cognition conflated with conceptual application. I see no supernaturalism here.
Finally, let me ask you a question: how on earth do you propose to cash out anything like intentional cognition scientifically?
It just strikes me as crypto-mysterianism, I fear, words calculated to assure the perpetual relevance of philosophy to a domain requiring ways to escape it. You need to show me how the use of normative metaphysics to redress the problems arising out of representational metaphysics, enables the *scientist* (because who gives a damn about philosophers) to naturalize *anything*. So long as your position contributes nothing substantive to the science over and above what less ontologically extreme positions (those requiring no books *beyond* the book of nature) contribute, it can and should be ignored.
My blog is basically a scrapbook filled with ways of naturalizing intentional cognition. The key, I think, is to recognize the inapplicability of intentional cognition (in either its representational or normative guises) to the question of the nature of intentional cognition. In short, to understand the hallucinatory nature of philosophical reflection, the likelihood that the bulk of conundrums paralyzing philosophical discourses such as normativism or representationalism are actually artifacts of heuristic neglect, the fact that cognition is utterly blind to cognition, so much so that it regularly runs afoul of fluency and availability effects. The key is to unlearn as much traditional philosophy as you possibly can.
Skepticism is key, in other words. I’m actually quite fond of the skeptical Wittgenstein (I was all set on writing my dissertation on him at one point), and one of the things I like doing is naturalizing his more cryptic claims. He understood that behaviour is key, but lacking cognitive neuroscience, he had no way of conceiving that behaviour mechanically, and so (given fluency and availability) ran with intentional idioms, and so repeated the misapplication errors of representationalism in a new philosophical register.
I have read your article, and also the majority of the books you reference. Scientism, as Alex Rosenberg attempts to show, is not an intrinsically bad word. I agree! Your argument is well written, and parts are convincing. But it doesn’t cash out anything like content, let alone the truth conditions which rely on said content. And therefore it leaves itself open to what Alex (The Genealogy of Content, 2015) thinks is one of our biggest problems: without a scientific account of content, there can be no method of cashing out truth. If your theory cannot accommodate truth (and you are against justification), how can you think it true?
Nor do you deal with Quine, Wittgenstein, etc. In fact, you simply “chuckle” at the latter. But, for now, let’s see if you can answer my question.
(It was a good read, nonetheless.)
I definitely agree that the challenge facing eliminativism is abductive, but I think that pertains to the bulk of intentional phenomena, not content in particular. (I call it the ‘Birds of a Feather’ argument: there seems to be more than a little chicanery involved in the way we police the intentional bestiary, which empirically impenetrable phenomena we eschew (God, magic, intrinsic value, ghosts) and which we claim as necessary conditions of this or that. I think the sheer opportunism of the myriad local eliminativisms you find out there says a lot about intentional phenomena in general.)
I don’t think content is especially problematic, only especially significant, given the way the astronomical complexities of cognition cue heuristic problem-solving modes. We dwell in ‘black box’ environments, effectively lacking, thanks to the inverse problem and the complexities involved, any behavioural sensitivity to the *mechanics* of our environments, including ourselves. We are literally adapted to make do in shallow information ecologies. We use cues differentially related to various target systems to organize a variety of reliable enough behavioural responses to those systems (once again, including ourselves). Like frogs we aim at floating black dots and hit flies (or fish hooks).
So when we talk to our mechanic we ascribe her ‘knowledge,’ rather than evolutionary and personal histories (which we can’t access but can take for granted), something that allows us to cognize her particular capacities on the cheap, much as we do when we presume money has intrinsic value. (The science is just beginning to catch up with all this, by the way: aside from pretty much everything coming out of the ABC Research Group, I heartily recommend Cimpian and Salomon’s piece on the inherence heuristic in BBS a year or so ago.)
The reason this all strikes philosophers as so difficult, why there’s a ‘hard problem of content,’ resides in the fact that metacognition (and therefore philosophical reflection) operates in the most shallow information ecology one could possibly imagine, one that is heuristic in the extreme, as it has to be given the astronomical complexities tracked. ‘Truth’ is a heuristic means of coping with neglect, a way to communicate a low-resolution approximation of a high-dimensional (mechanical) relation (reports to situations) that we have no way of cognizing. It’s the floating dot (or hook) absent any capacity to intuit either the dimensions neglected, or the limits this imposes on its applicability. So we presume we have all the information required to understand fly (or fish hook) catching, when we don’t. Since truth-talk solves in practical contexts (adaptive problem-ecologies) we assume Truth has to be efficacious. Since truth-talk solves by neglecting mechanical information, adducing mechanical information, not surprisingly, renders it unintelligible. We convince ourselves that only intentional idioms possess the ‘conceptual resources’ required, that we are *forced* to adopt some kind of normative or representational metaphysics. We begin laying out our theories, form traditions where the high-dimensional existence of content (as a thing or a function) cannot be denied. Thus begins the interminable application of shallow-information tools to deep information problems.
The heuristics and biases programme in cognitive psychology shows this same effect across a wide spectrum of problem-solving contexts: We regularly confabulate. Our ability to generate stories we find convincing depends on fluency and availability, and the less information the better. Traditional philosophy will be increasingly diagnosed along these lines, I assure you.
The idiom of truth is a great way to solve practical communicative problems on the cheap. Philosophical truth is a crash space, plain and simple. My position not only provides good empirical reasons to set aside Rosenberg’s (and Stich’s) worries regarding truth and eliminativism, but also provides a naturalistic framework for understanding why Truth should have got us so rattled in the first place.
Longwinded, I know. But I thought I would bank some karma (reciprocation bias) while waiting for your response to my question! Why trust what are so obviously shallow information tools?