Scott Bakker’s article, Necessary Magic, is a trenchant rejoinder to my article, Scientism and the Artistic Side of Knowledge (SASK). In the following response, I’ll try to clarify some of the relevant issues in our
discussion and then I’ll address the central points of disagreement. As
indicated by this article’s title, I think that, a scientistic interpretation
of cognitive science notwithstanding, BBT’s mechanistic self-image is
consistent with a transcendental interpretation of how we appear to ourselves
through introspection. We are not factually
what our intuitions say we are, but that matters most to those who assume a
scientistic conception of knowledge. If we act as good mechanists and ask what
the intuitive self-image is efficient at doing, we should be led to agree with
Scott when he says that that self-image is a lie. So we’re good at lying to
ourselves and indeed we’re naturally built to do just that, perhaps because we
can’t stomach the natural facts. We retreat to the matrix of illusions, as it
were, and because scientists are bent on discovering the underlying facts, we
could use a strategy for heroically dealing with both perspectives, since both
seem inevitable for machines like us. That’s where aesthetic, ethical, and
existential standards can come into play, and so I think Scott’s project and
the philosophy I call existential cosmicism are largely harmonious.
Scientism and Transcendentalism
Now, then, to the preliminaries. SASK was motivated by the debate between Scott and Terence Blake. Blake contrasted scientism with pluralism,
and I was interested in how far the scientistic line can be pushed, so that’s
why I wrote about scientism in the context of BBT. But is BBT scientistic or
not? “Scientism” has a nonpejorative core meaning, but also pejorative
connotations. According to the core definition, scientism is the belief that
the sciences are the only disciplines that supply us with knowledge. Scott says
that “humans are theoretically incompetent, and that science is the one
institutional prosthetic that clearly affords them some competence.”
This seems scientistic in the core sense, although he also says that true
claims can “drift about” in nonscientific philosophy. So if “scientism” is
tweaked to mean that science is the only reliable
source of knowledge, Scott’s view is scientistic, for whatever that
nonpejorative characterization is worth.
The reason the word is usually read as pejorative, though,
is that philosophers have reached some consensus that scientism refutes itself.
After all, scientism is a philosophical rather than a scientific proposition.
Just ask yourself, then, whether the claim that science is the only reliable
source of knowledge is itself reliable. If not, we needn’t trust that all
knowledge comes from the sciences, and if so, we have the paradox of knowledge
that comes reliably from a nonscientific discipline (philosophy). Either way,
scientism is unstable. So is BBT scientistic in this pejorative sense? This
raises the issue of presuppositions, which is perhaps my main point of
disagreement with BBT.
Scott argues for the exclusive reliability of scientific
knowledge by way of induction:
scientific methods have been the only ones to lead to cognitive progress,
whereas many intuition-based myths and prejudices have fallen under the
scientific blitzkrieg. But as the philosopher David Hume showed, this kind of
simple inductive reasoning, which generalizes on the basis of particulars,
isn’t entirely rational, or strictly logical. This reasoning rests on faith
that the future will be like the past. That assumption goes beyond the data. As
Hume put it, we form the “habit” of mentally connecting sensations, in
something like the way a dog or a mouse is trained to think in restricted ways,
to project patterns onto the stimuli. Immanuel Kant went further when he said
that the mind has innate ways of thinking, so that self-knowledge can be
nontrivially indubitable (or “synthetic a
priori,” as he put it). If we instinctively trust that the future will be
like the past, for example, we’ll tend to reason inductively, just as if we’re
prone to numerous cognitive biases, as cognitive scientists have shown we are,
those biases will colour our conception of the world. I’ll call this thesis
that we have knowably-innate ways of thinking transcendentalism.
Now, at first glance, BBT is transcendental since BBT says
the mind is made up of mechanisms that aren’t (yet) controlled by us. We think
in the way they cause us to think, although scientists have found ways to get
around some of those mechanisms, or at least to add some methods to them. But
where Scott differs from the transcendentalist is that the latter bestows the
title of “knowledge” on some of the fruits of innate mental processes, whereas
Scott maintains that we’re theoretically incompetent, that intuitive
self-knowledge is unreliable because our innate ways of thinking are bound to
mischaracterize what’s going on, to work with illusions rather than reality,
with mere artifacts of the limitations of those mental processes rather than
with the mechanistic underpinnings. This is a crucial question, to which I’ll
return soon: When our innate mental processes--which Scott and cognitive
scientists generally think of as mechanically implemented heuristics or
neurofunctions--are turned on themselves, without the benefit of scientific
testing, is their output in any way useful?
It’s worth pointing out, I think, that strictly speaking,
Kant would agree that outputs of innate mental processes are artifacts of those
processes and even that they’re “subreptive” (misleading, deceptive). That’s
why he distinguished between appearances and unknowable reality (phenomena and
noumena). We can resort to the Matrix metaphor and say that we live in a world
of superficial appearances that’s partly constructed by the way we’re built to
think, by our modes of perception, primitive concepts, cognitive biases,
intuitions, irrational leaps of faith, and so forth. Even when science investigates
the real world, we’re bound to apply our innate thought forms to it to some
extent and thus to humanize what’s alien to us, turning the noumena into
phenomena. Scott says, though, that scientific knowledge of natural mechanisms
isn’t significantly tainted by any such dubious, nonscientific biases, and so
scientific self-knowledge ought to replace our more native floundering. The
question, then, of whether our nonscientific attempts to know ourselves are at
all competent, reliable, or otherwise worthwhile depends on what’s going to
count as knowledge.
Is BBT Scientific or Philosophical?
Before I pursue those key questions, I want to clarify a
couple more preliminary issues. Scott chastises me for engaging in mere
philosophical speculation, whereas what the critic of BBT needs, he thinks, is
scientific evidence “that accurate metacognition is not only computationally
possible” but “also probable.”
This raises the question of whether BBT itself is scientific or
philosophical. I think the answer is that it’s both. The scientific part of BBT
is the hypothesis of how certain heuristics work in the mind. In his summary
article, The Crux, he
lists four theses that make up BBT. I’d say that the first two, which he calls
trivial, are the scientific ones. They are that cognition is thoroughly
heuristic and that metacognition and cognition are continuous, which is to say
that the difference between the inner and the outer environments isn’t
Cartesian or metaphysically substantial and therefore that we can know about
either in only similar ways. But BBT also has a philosophical side, which has
to do with its interpretation of those scientific facts. So the other two
theses, which he says are more controversial, are the philosophical ones. These
are that “Metacognitive intuitions are the artifact of severe informatic and
heuristic constraints. Metacognitive accuracy is impossible” and that
“Metacognitive intuitions only loosely constrain neural fact. There are far
more ways for neural facts to contradict our metacognitive intuitions than
otherwise.” These philosophical propositions and their implications are the
ones that are consistent with scientism, that imply that folk psychology is
full of errors, and so forth.
Now, I defer to Scott and to cognitive science in general
when it comes to showing how our mental processes are implemented by neural
mechanisms. A disagreement on those grounds would indeed call for scientific
evidence, but my article doesn’t take issue with that side of BBT. The
disagreement is on the philosophical issues, and here scientific evidence isn’t
likely to be decisive one way or the other. The question of how we should
interpret intuitive self-knowledge, given that the mind is a natural entity, is
going to turn on analysis of concepts like “knowledge” and on whether new
concepts can be developed partly by philosophical artistry (speculation) to
wrap our mind around the kind of conflicting evidence that makes for a
philosophical problem in the first place. So the fact that BBT has a scientific
side doesn’t mean that a legitimate criticism of BBT must be scientific. When
philosophy’s on the table, we play by philosophy’s rules. Scott will say there
are no such rules, according to BBT, because in so far as philosophy is
nonscientific, philosophers appeal to intuitions and intuitions are
systematically misleading. But if I’m right that half of BBT is philosophical,
we’re back to the questions of self-refuting scientism and of transcendentalism.
Mechanisms and Biofunctions
There’s one last preliminary issue I want to address. Scott
talks a lot about mechanisms, but
instead of defining the word he wants to leave the matter open to encompass
future scientific work, since he derives the term from cognitive science. A
mechanism isn’t just a string of concatenated events or even just any causal
relation; rather, a mechanism is a series of systematically coordinated causal
relations that form something like a machine. In biology, the machines are built
by natural selection. A machine has parts that work in tandem to produce the
whole system. Scott tells me he endorses the philosophical work on mechanisms
by William Bechtel and others called the New Mechanists, and they define
“mechanism” in terms of a hierarchy of capacities deriving from the structural
features of a system’s components, which capacities together produce some
phenomenon that’s explained in such mechanistic terms. So a biological machine
is really just an assemblage of mechanisms within mechanisms, whose capacities
or functions are carried out because of the physical properties of the parts
and sub-parts of the system and because of how those parts are organized.
If this is the kind of mechanism that’s relevant to BBT,
though, I think there’s a problem with Scott’s rhetorical question, “And if it’s
mechanisms doing all the work, then what work, if any, does normativity qua
normativity do?” The problem is that if semantics, normativity, and the
entirety of our intuitive self-image derive from neuromechanisms doing all the
work, there’s no mechanistic basis
for saying that these results of their work are bogus or mere artifacts in any
pejorative sense. Consider a printed page containing text but also smudges or
other extra marks which are artifacts of flaws somewhere in the copier’s
mechanisms. The distinction here between the mechanistic function and the
artifact, or between the function and the malfunction, is based on the presumed
intention which causes the copier to be built in the first place. We want text,
not the smudges.
And all we have in the case of biological mechanisms are
natural selection, genetic drift, and the like. So the question becomes one of
whether the so-called illusions, subreptions, or artifacts of the manifest
image, namely semantic, normative, and other such judgments, have an
evolutionary role. If so, they’re legitimate functions of our neural
mechanisms, not malfunctions, whatever some of their accidental harms to us
might be. Even if these functions are exaptations, or the results of
trial-and-error tinkerings by our snooping forebears, there’s no mechanistic
basis for favouring some biological effects over others, as long as the effects
are mechanically produced. (This is just Terence Blake’s point about pluralism.)
Certainly, a semantic (truth-centered) or a normative reason for saying we
should approve of how the brain processes external stimuli, but disapprove of
the brain’s processing of itself, is off the table for anyone who thinks those
judgments are bunk.
Now, Scott maintains that BBT shows how our neural
mechanisms break down when applied to themselves. The brain is blind to itself
and that’s why we shouldn’t trust what the brain says about itself, by itself.
This is why intuitions are neural malfunctions, outputs which lead us astray
from the reality of human nature. The
reality is that, contrary to popular opinion, we’re machines (mechanical
systems), not people. Our intuitive self-image is a distortion caused by
the overreach of our cognitive faculties. These faculties evolved to process
sensory information, which is why the sensory connections that take up so much
space in the brain, especially the visual ones, end in sense organs that point
outward, at the part of the environment that contains food and potential mates
and threats, leaving us in the dark about the self behind the curtain. At best,
though, this means that the intuitive self-image isn’t likely factual, or as Scott says, “accurate.”
Still, that image can have other uses, in which case I see no mechanistic
reason for casting aspersions on this product of neural mechanisms. And the
transcendentalist may be able to live with that.
Postmodern Relativism
What I’m saying is
that a mechanistic view of the self doesn’t preclude a transcendental interpretation
of the manifest image. A transcendentalist would be happy to concede that
our innate mental faculties are naturally produced. We evolved to think as we
generally do about ourselves and about everything else. That past is full of
accidents, including genetic mutations which are instrumental to natural
selection. So if the mind is a hierarchy of heuristics running on neural
modules and those heuristics mechanically,
reliably produce the manifest image when turned on themselves without the
benefit of scientific oversight, so be it, says the transcendentalist. That’s
just a scientific confirmation of what we already know, which is that the naïve
view of ourselves is stable.
But is this intuitive self-image True? Are we really as
free, rational, conscious, or precious as we like to think we are? This is like
asking a slave of the matrix why she doesn’t give up the world of illusion and
start living in the real world. The reason she doesn’t is that her brain is
plugged into one world and not the other. If you pulled that plug, inserted it
into a different world-generating machine, and filled her head with a
qualitatively different set of experiences, she might come to think that her
time in the matrix was indeed shallow. But as long as we’re bound by our
present hardware limitations, including the fact that the brain is natively
blind to its mechanical nature, we’re going to treat the so-called illusion of
our intuitive self-image as real enough
for practical purposes, just as the people who are hardwired to perceive
the matrix will do their best to live in the terms set by that program. This is
all just standard transcendental philosophy.
However, Scott will say that science is in the business of
unplugging us from the world of illusions, of showing us what the movie calls
“the desert of the real” (taking that phrase from the postmodern philosopher
Jean Baudrillard). We bounce back and forth between two worlds, switching
between the commonsense and the scientific views of the self. The latter shows
us the mechanisms hard at work producing the intuitions we take for granted,
and also tells us why our intuitions are mere caricatures compared to the
scientific masterpiece. When we personalize our mechanical systems, we sketch a
low-resolution caricature that distorts our inner reality, at best. Science
alone tells us what we really are and that threatens the naïve viewpoint,
because we can no longer hide from science. The AI archons are coming to drag us out of the matrix, to horrify us
with a vision that contradicts our feel-good myths, so that we’ll have to
choose to accept science’s dehumanizing theories or to lie to ourselves.
The existential apocalypse is coming, because the folks with white coats and
pocket protectors are coming for our ego and when they get hold of it, undoing
the enchantment we’ve cast, with their quantificational incantations, we’ll be
saddled with the Buddhist’s quandary, but without the Buddhist’s training. That
is, we’ll be nowhere, detached from our naïve image of ourselves, forced to
keep telling the lies but unable to take them seriously because of what our
ancestors long ago condemned us to see, when they cursed us by
taking reason too far.
I think a postmodern transcendentalist would respond by
wondering why we should surrender so readily to the scientific worldview. That
worldview too is mechanically produced, the result of certain rational methods
that socially build on our evolved heuristics (our curiosity, creativity,
caution, and so on). The notion that a scientific theory is semantically True
or normatively excellent, according to epistemic, aesthetic, or pragmatic
ideals, is quite irrelevant from BBT’s mechanistic viewpoint. So why should we believe that we’re
mechanisms and not people? If we’re interested in just the facts, we’re
begging the question, since the bare facts are relevant only from the
scientific perspective. Likewise, if we’re interested in what feels right, in
what’s good for society or in what preserves our sanity, we’re presupposing
judgments of relevance that are made with the intuitive self-image already in
mind, thus begging the question against the mechanistic worldview (see this article for more on the issue of the choice between worldviews). And so we wind up
with postmodern relativism and antirealism. There’s no reality, but only
constructed worldviews and our job is to play by the rules of each, depending
on which game we choose to play or on which questions we prefer to ask.
I’m not an antirealist, but before I explain how I think we
should look at the matter, I think we can use this opportunity to reframe the
point about scientism. Scientism becomes the contention that “Why?” questions
can be replaced by “How?” ones, that the mechanistic, naturalistic worldview
eclipses the intuitive, philosophical and religious ones, that there’s really
only one game in town. In SASK, I say that the only way to coherently express
this point is to turn scientism into a value-neutral prediction about the
relevant probabilities. For one reason or another, we may indeed end up no
longer asking “Why?” questions. But as to whether philosophical, religious,
aesthetic, or moral questions are really mechanistic ones, this might be like
asking whether the rules of Angry Birds reduce to those of Monopoly.
Are Intuitions Epistemically Competent?
This brings me to the analysis of “knowledge.” Whether the manifest
image counts as knowledge depends on what we mean by that word. Is knowledge
the ability to map the world, to mentally represent natural mechanisms with the
equivalent of blueprints, thus allowing the knower to reengineer the
mechanisms, to have power over the world? This is a stereotype of the
technoscientific conception of knowledge. It assumes pragmatic ideals as well
as a semantic view of truth, so not even this conception is available from the
mechanistic perspective. In addition, we could add social, aesthetic, or even
existential ideals to our epistemology, which I’ll say a little more about in a
moment. At any rate, a mechanist would have to tell just an impersonal,
evolutionary story about certain neural functions that have strictly adaptive
value in that they enable the replication of genes from one generation to the
next. These functions wouldn’t map the world in any magical way, but would
receive information from the environment, process the signals, and respond in
ways that protect the genes. That would be the main mechanical role of
knowledge.
But now the mechanist faces an awkward question: What if the
intuitive self-image is needed for the fulfillment of that mechanical role? In
particular, what if most people have to
lie to themselves, personifying their mechanical identity, to stand being alive
in nature? What if our ancestors embarked on the project of knowing
themselves, hitting upon the kludge of the intuitive self-image, on the basis
of paltry evidence, because they got too smart for their own good and needed myths
to delay, at least, the existential apocalypse that afflicts those who step out
of the matrix to behold the world’s horrible undeadness? The point here is that although a strictly rational conception of knowledge
may do for certain purposes, in the big picture knowledge has a nonrational
side. Intuitively, the beliefs that are known must be true but also justified,
meaning that we must have reasons to show others that we’re entitled to that
belief, that it wasn’t a lucky guess and that we’ve fulfilled our social and
philosophical obligations as truth-seekers. In mechanistic terms, your
processing of the environmental signals includes your sending of signals to
other information-processors (i.e. truth-seeking people), to reassure them that
your channels for processing the data are functional. Either way, when it comes
to evaluating the intuitive and the mechanistic self-images, we may find that
these images are themselves instrumentally
related. To fulfill our evolutionary function, which is what’s mainly relevant
to someone with the mechanistic mindset, we may need our myths, intuitions, and
speculations to mitigate the damage done to us by our relatively high
intelligence.
Scott says I seem to want “intentionality to be both
necessary and magic, to belong to this family of things that for reasons
never made clear simply cannot be mechanically explained--or in other words, natural.”
This isn’t so at all. Presumably, everything can be mechanically explained and
thus naturalized. But this doesn’t mean “How?” questions replace “Why?” ones or
that science is the only game in town. To be the sort of creatures BBT and
cognitive science say we are, we may have to invent a counterfactual world, a
new game in which we’re obliged to tell ourselves noble lies to survive. The manifest image is our matrix. In
its scientific capacity, BBT explains how the intuitive self-image is
mechanically produced, but the philosophy of BBT is scientistic and so Scott
downplays the potential for our intuitions to have advantages as well as
drawbacks. He talks about how Western philosophy has been muddled for
centuries, because philosophers have been led astray by intuitions. But perhaps
philosophy is like the American structure of government: strategically divided
to disempower the masses and the demagogues who control them, to prevent
tyranny. Perhaps as Leo Strauss said, philosophy has esoteric as well as exoteric
functions, the latter being to tell noble lies to those who prefer to be happy
rather than eternally skeptical, to reserve enlightenment for the tragic heroes
who can withstand the angst that’s the air breathed outside the matrix. Perhaps
much postmodern Western philosophy functions now as a brake on science, to
obscure the naturalistic worldview and to reassure the masses that it’s all
just fun and games so they can go back to being happy, productive citizens.
So science tells us the facts, but if you’re aware only of
the facts, you don’t know what’s going on, even given just a mechanistic
conception of knowledge, because such a cursed machine that doesn’t entertain
any commonsense or politically correct delusions will more than likely be
unable to fall in love, have children, or hold down a job. That’s why
scientists usually leave their mechanistic worldview at the office. The
intuitions, myths, speculations, and cognitive biases are needed for the clever
mammals that we are to function properly in evolutionary terms, which are
precisely the mechanistic terms taken to be fundamental by the naturalist. So the naturalist can’t afford to dismiss
semantics, normativity, and the rest of the intuitive self-image: the latter is
needed as one of the causes drawn up by the mechanist’s quasi-blueprint of the
cognitive mechanism. Only if we’re largely irrational in our estimation of
what we are will we act as predicted by cognitive science. Our mechanisms will
function properly only if we often vegetate, turning to the matrix which is
just the world of inner hallucinations we inevitably perceive when we direct
our mental processes back onto themselves. When we introspect we don’t find the
neural mechanisms, but we’re skilled at socialization so we easily personify our
inner life, interpreting the way our thoughts hang together, in folk
psychological terms.
I say that the majority may need the illusions to survive,
but tragic heroes may also need to cope with their precarious position in the
limbo between the intuitive and mechanistic worldviews. These brave or foolish
few appreciate that there’s no magic and that our intuitions about our inner
nature are fanciful. But they’re also transcendentalists, meaning that they
appreciate the absurdity and the horror of undead nature (of a world that
mindlessly creates machines), and thus also the potential of certain mechanisms
to behave strangely, say, by producing fictions to escape from that reality.
Where does BBT stand in this context? Again, I question only
its philosophical interpretations and indeed only some of them. I agree that
scientific knowledge of our mechanical identity may generate such cognitive
dissonance that not even the slumbering masses can hold onto their illusions
for long. In that case, we’ll enter a posthuman world, psychologically
speaking, which is more or less beyond the event horizon. I agree also that the
intuitive self-image, as it mesmerizes the more unreflective folks, is an
uninspiring and indeed pathetic lie. As I say, the lie may be deemed noble if
the alternative is apocalypse, but the reason I think ordinary and theological
folk psychologies are rather pathetic differs from Scott’s. Scott compares the
intuitive self-image to the mechanistic one and finds the former wanting on
scientistic grounds, whereas I condemn existentially
inauthentic self-images in contrast to authentic ones. The trick is not to
lie so completely to yourself that you get carried away with your fictions,
lose all humility, and start a wildly irrational religion based on embarrassing
conceits. Instead, the respectable way
to con yourself is to heroically occupy the space between the intuitive and the
naturalistic self-images, to have them both in your mind at the same time,
using the impersonal one to check your delusions of grandeur, but feeding off
of the speculative one to sublimate your horror and angst, thus getting by
in the existential game, which is yet a third perspective that synthesizes the
other two in the way I’ve just outlined.
So are intuitions theoretically competent? If this question
is about whether intuitions compete well on scientific grounds, producing
knowledge of the facts, of natural mechanisms and so forth, the answer is
surely no. But again, this is like asking why someone who’s playing Angry Birds
isn’t simultaneously doing well at playing Monopoly. Moreover, in so far as
scientific theories are ideal, and so “theoretical competence” means just “the
ability to do what science does,” the question is loaded in this context, which
is why I speak more generally of “epistemic competence” in this section’s
title. The transcendentalist says only that we have knowably innate ways of
thinking, not that when we think in those ways, that thinking puts us in touch
with the facts; this is to say that some of our ways of thinking, or some
aspects of knowledge, may not be empirical or scientific. We have a nonrational
side that makes us mammals rather than just fact-recording computers. If we
know that introspection is like a funhouse mirror that distorts our mechanical
nature, presenting us only with an illusion of a unified, personal self, this
still leaves us with epistemically relevant common ground. The point isn’t that
our intuitive self-image is accurate or factual, but that it’s universal and
stable precisely because it’s mechanically built into us, like a niche that’s
bound to be filled.
The real questions, then, are whether intuitions are
competent at doing something and if
so, what that might be. Mechanically speaking, what can intuitions do? And what does science do, for that matter, once
science itself is naturalized? As I said, the best answer to the latter
question is an evolutionary story about how science allows us to dominate the
planet and thus to preserve our genes. And as I said, intuitions may have their
instrumental role in that very mechanical relationship between
information-processors like us and the hostile environment. We need a matrix to
vegetate, to distract ourselves so that we can efficiently perform our natural
functions. But I think there’s another natural function of our intuitive
self-image: the existential one of helping us to overcome natural horrors by
more or less ascetically rebelling against them. The rules of that existential
game would be largely aesthetic.
And now Scott might say: “These nonscientific games are
irrelevant and foolish, they’re out of touch with the facts, and all that’s
worth talking about and knowing are the mechanisms. There are no aesthetic
ideals and existential or religious goals of rebellion against nature are as
preposterous as the premodern myths disposed of by naturalistic science.”
That’s how things look initially from the mechanistic mindset, but my point has
been that this mindset does undermine itself if it’s interpreted
scientistically, on the philosophical level, since then the mechanist has to
start thinking like a transcendentalist, asking whether the matrix of illusions
and follies is needed instrumentally
for the sake of our evolutionary function. Well, if the mechanist must
entertain a conformist instrument, why not the existentialist’s rebellious one?
If the intuitive self-image is needed to fulfill our mechanical role in nature,
what’s needed to break free of the matrix and to survive as posthumans in the desert
of the real? If we’re advocating a mechanistic worldview, we’ve got to think
instrumentally rather than semantically or normatively, meaning that we’ve got
to think of how to engineer efficient machines and mechanical relationships,
based on the physical capacities of the available parts. Instrumentalism is thus the bridge between the mechanist and the
transcendentalist, and once we start thinking instrumentally about our
capacities, we can naturalize semantics, normativity, and existential
aesthetics in BBT’s manner, by seeing them as mechanically produced fictions,
but as necessary or perhaps useful ones.
In my quantum mechanics class, the professor said that quantum theory makes free will possible again. The counterargument is that on a (very small) macro level, quantum mechanics simplifies to thermodynamics, and so mechanistic explanations are correct in the sense that Newtonian motion is correct for our everyday experience.
In that sense, chaos theory has put epistemological skepticism on the throne. Not only do we not know the Truth, but mathematically we cannot know the Truth from a mechanistic perspective no matter how much data and knowledge we collect.
All self-organizing systems are prone to bifurcation, thus rendering inductive reasoning impotent -- and all quantification of neural processes and thought depends on induction.
Because of this, I seriously doubt we will ever understand metacognition beyond a series of catchphrases and self-congratulation.
On a larger level, it means that our world is immensely sensitive to our collective decisions and cosmic events (many of which are "truly" random) in ways that we can never quantify. The more I understand the more it is obvious that "Why" is almost the only valid question.
Without asking "why," we are slaves to the status quo and devise existentially absurd reasoning. Much of consensus cognitive science calls into question the fundamental assumptions of our socioeconomic system (like the assumption that we are rational, self-interested maximizers) but is used to devise trivialities instead of revolutions.
There are some interesting theories on why that is and how it ends that are as intrinsic to any thought process as neuronal firings.
Have you read any articles on how foundational pillars of cognitive science itself are under attack because of WEIRD selection bias?
All of this is to say that I cannot conceive of a consistent or useful non-transcendental explanation of existence.
Some interesting points here. I agree completely about cognitive science versus economics and this is on my list of blog topics (specifically the abuse of math in economics).
I haven't heard of the WEIRD bias (Western, educated, industrialised, rich and democratic), but I have heard of a related criticism of cognitive science, which is that this science is biased towards showing how irrational we are, because such a surprising result gets more attention and the scientists like to learn about norms by learning how the norms break down. This is like learning about health by learning about how we can be afflicted by diseases and then forgetting that we're usually healthy.
The connection between the Butterfly Effect and "Why?" questions is interesting. You're saying that mechanistic explanations are oversimplistic and we might as well seek magical pseudo-explanations, or answers to philosophical "Why?" questions.
I don't quite get your point about bifurcation and induction.
I'm not sure if these concerns of yours add up, though, to a reason to doubt there are going to be more and more subversive mechanistic explanations of the mind and the brain. Mechanistic explanations are simplifications, like all scientific explanations. Isn't the brain a mechanism with a bunch of naturally selected functions?
I have a lot to say in reply, but the general gist is that I agree there will be more mechanistic explanations of the mind and brain, and they may even be correct; they just won't be useful.
I'll go as far as to say that usefulness and (mechanistic) correctness have a Heisenberg-uncertainty-like association.
To start with, my background is in collaborating with physiologic neuroscientists (as opposed to cognitive neuroscience, a field that many physiological ones are skeptical of). From this, it is apparent that we are very close to being correct in how individual neurons act, although we know very little about the systemic effects of genetic expression in neural behavior (I can go on a LONG time about misconceptions about genetics).
We are even pretty good with very small clusters of neurons connected in certain configurations and the general purpose of those configurations such as filtering, oscillating pacemakers and amplifying. All the general parts of signal processing that we do in electronics have biological analogues [or did we just make it that way because that's how we approach it?].
However, once you get into non-trivial networks we are nearly at a complete loss. We can't even explain simple things like breathing or other quasi-autonomous functions, let alone cognition.
Yet this research has been pivotal in pushing forward the mathematics of network theory and the conclusions are unsettling for the mechanists. In short, most networks are designed to have chaotic cascading in which they display predictable behavior the majority of the time using mutual inhibition (activation of one node decreases likelihood of the firing of another node, and they project onto each other) but also have excitatory connections that can cause immediate and drastic behavioral change.
In essence, it means that all of life occurs on the edge of chaos and the trigger point is unpredictable.
The theory behind this is that it is the best way to do state change -- even for things as simple as going from sitting to standing -- which requires rapid reorganization of network firing.
The calculations of these behaviors occur in neuronal groupings that have a small number of connections to other functional groups, even to different parts of the brain. This configuration is called a small-world network (http://en.wikipedia.org/wiki/Small-world_network) and is a very efficient way to do recruitment across senses. It also means that different triggers can recruit in similar ways: for instance, learning visually may start the cascade by focusing on the occipital lobe, whereas performing actions may start in the parietal lobe. In either case, they can communicate with the hippocampus, amygdala, neocortex and other memory-emotional-logic processing parts of the brain.
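To make the small-world idea concrete, here is a minimal, dependency-free sketch (my illustration, not part of the comment): a ring lattice where each node talks only to its near neighbours has long average path lengths, and adding just a handful of random long-range "shortcut" edges collapses those path lengths, which is the efficiency of cross-region recruitment being described.

```python
# Small-world effect: a ring lattice plus a few random shortcuts.
import random
from collections import deque

def ring_lattice(n, k):
    """Each node i connects to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def avg_path_length(adj):
    """Mean BFS distance over all ordered node pairs."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(0)
n = 100
lattice = ring_lattice(n, 3)
shortcut = ring_lattice(n, 3)
for _ in range(10):  # add ten random long-range connections
    a, b = random.sample(range(n), 2)
    shortcut[a].add(b)
    shortcut[b].add(a)

print(avg_path_length(lattice))   # neighbours only: many hops on average
print(avg_path_length(shortcut))  # a few shortcuts: markedly shorter
```

Adding edges can only shorten distances, so even ten shortcuts among a hundred nodes cut the average path sharply, while most of the local (clustered) wiring is untouched.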
OK so what right? Thus far I've laid out a mechanistic and structural explanation even if the behavior is unpredictable.
The really interesting part is next, though: the processing and feedback from the limbic/cortical system. These parts (which make up consciousness) dictate perception, and the traditional belief is that they merely choose which sensory information to pay attention to, while the sensory mechanisms stay static.
There is increasing evidence that this is incorrect, and the sensory pipelines are influenced by conscious belief and intent. For example, I was taught that the visual pipeline was made up of discrete steps that identified lines, then went to shapes and added color, etc. Much of this arose out of various optical illusions and brain damaged patients. However, as this article points out (http://www.psmag.com/magazines/pacific-standard-cover-story/joe-henrich-weird-ultimatum-game-shaking-up-psychology-economics-53135/) there is evidence that the outputs from the steps aren't put together the same way across cultures.
Not only are indigenous cultures immune to perspective optical illusions (presumably because of their environment) but people from "collectivist" societies are much less likely to see foreground/background illusions because they don't focus on the foreground as sharply. The theory is that they look for whole group coherence as a perceptive trait, whereas we look at individual elements first and the group secondly.
Some of the most radical perspective differences come in the spiritual realm. A few years ago I read about a former fundamentalist who literally saw people he didn't like as demon possessed. He stated that this was Fact and as obvious as a hairstyle, so to him it was insane that other people couldn't see demons. His fundamentalism was eventually broken down by academic study of the Bible and his inability to accept its myriad of inconsistencies. As he stopped believing, his perception of the world entirely changed and eventually he no longer saw demons: he simply felt the emotion of "I don't like that person."
This is why I say that mechanistic explanations are not useful. Sure the heuristics in all people are roughly the same, but those same heuristics lead to drastically different perceptions of the world based on differences in physical environment, social expectation and metaphysical belief. Simply stating how they operate is not enough to elucidate understanding, because they are inexorably tied to non-mechanistic environments.
Thus asking "Why" is much more interesting. Why do belief systems exist and where are they useful and where are they not?
This brings me to one last point.
I sincerely believe that we are reaching the end of the Enlightenment, because we have gotten as far as we can with current reductionist logic and mathematics. It is fascinating that we can shoot a laser and hit the moon, but can't predict tomorrow's weather with more than 60% accuracy. The key to understanding is all in feedback loops.
Without feedback loops, most phenomena can be explained with closed-form solutions, i.e., formulas where you just plug in the values at any point in time and know the answer, such as F = ma. Mathematically this is analogous to deductive reasoning and is strongly tied to calculus.
With feedback loops, no phenomena (other than trivial examples) have closed-form solutions. They are modeled using differential equations of the form f'(n) = f(n), where the change in the function depends on the function's current value. This is analogous to inductive reasoning, because you start with f(0), use f' to define f(n + 1), and then perform numerics.
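A small sketch of that contrast (my illustration): the closed-form case can be evaluated directly at any time t, while the feedback case f' = f has to be stepped forward from f(0) by numerics, here with forward-Euler steps approximating the exact solution e^t.

```python
# Closed-form ("deductive") vs. feedback ("inductive") solution methods.
import math

def position_closed_form(t, f=2.0, m=1.0, v0=0.0, x0=0.0):
    """Deductive: plug any t into x(t) = x0 + v0*t + (F/m)*t^2/2."""
    return x0 + v0 * t + 0.5 * (f / m) * t ** 2

def euler_growth(t, steps=100000):
    """Inductive: start at f(0) = 1 and repeatedly apply f' = f."""
    h = t / steps
    value = 1.0
    for _ in range(steps):
        value += h * value  # next value depends on the current value
    return value

print(position_closed_form(3.0))         # exact at any t, no iteration needed
print(euler_growth(1.0), math.exp(1.0))  # Euler estimate vs. the exact e^1
```

The closed-form answer is exact at any time with one evaluation; the feedback answer is only ever approximate and must be rebuilt step by step from the initial condition, which is the commenter's point about induction.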
The world is ruled by feedback, whether it's biology, economics, the weather, etc. Currently, most socioeconomic programs are fundamentally flawed because they don't take into account feedback. For instance, most economic theories assume a fundamental growth factor with random fluctuations and that's led to the efficient market hypothesis, lack of regulation, etc. In reality, the economy is in disequilibrium and crashes/busts are fundamental properties instead of "exogenous events." However, admitting this would rob economists of their power because then their simple formulas would have no purpose.
However, once you look at things from a systemic perspective, immense existential questions arise. At this point, much of engineering and the sciences ignore these existential questions because they seek to linearize the system...meaning that they establish boundary conditions in which feedback is predictable and hypothetically stable. Outside of these boundary conditions however, is complete uncertainty.
Fukushima and Deepwater Horizon are two instances where control systems were put in place that assumed boundary conditions would hold, but they did not. In both cases, the safety systems themselves quickly led to feedback that caused catastrophic failure.
Here is the thing: by definition all systems will eventually exit boundary conditions because of some external interference. This causes the system to bifurcate and become unpredictable. Nassim Nicholas Taleb has described these events as black swans.
This fundamental truth demonstrates why all inductive reasoning can only be useful but never Correct.
This is clearly seen in the Lorenz attractor (http://en.wikipedia.org/wiki/Lorenz_attractor) [it is ironic that the butterfly effect, a concept devised by Lorenz, takes its name from a Bradbury short story, and that the attractor he discovered also looks like one].
Not only does the Lorenz attractor concretely demonstrate the inability to predict the future of a chaotic system after a certain amount of time, but it also shows how drastically the behavior of the full attractor changes when fiddling with feedback parameters.
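As an illustration of that sensitivity (mine, not the commenter's), a crude forward-Euler integration of the Lorenz equations with the classic parameters shows two initial conditions differing by one part in a million ending up in entirely different states:

```python
# Lorenz system with the classic parameters sigma=10, rho=28, beta=8/3,
# integrated with simple forward-Euler steps (crude but adequate here).
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = trajectory((1.0, 1.0, 1.0), 30000)         # 30 time units
b = trajectory((1.0 + 1e-6, 1.0, 1.0), 30000)  # nudged by one millionth
print(a)
print(b)  # by now the two states bear no useful resemblance
```

Early on the two trajectories are indistinguishable; the microscopic difference is amplified exponentially until prediction is useless, even though the governing mechanism is three short deterministic equations.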
Analyzing a system and then interacting with it in a way that changes the feedback (e.g. doing stock market analysis and making trades, burning fossil fuels, etc) is fundamentally guaranteed to radically alter any complex system and as such, systems theorists are often more concerned with heuristics about how to set up and interact with systems rather than trying to control them (http://en.wikipedia.org/wiki/Twelve_leverage_points).
I've been at conferences filled with some of the best mathematicians in the world, who have decided that biology (and, by extension, all complex systems) is fundamentally intractable. They are deciding we need to rethink logic and math from the ground up, and it will reasonably be centuries before any truly correct understanding could hope to be created.
That doesn't mean that the insight is useless though -- quite the contrary. There are many applications that we rely on using nonlinear systems control theory where we get a lot of utility but don't understand the system. The key to this utility is the fundamental acceptance that we need to define our goals subjectively and make Why the cornerstone of our interaction.
This understanding is starting to spread amongst scientists as individuals, but institutions still have the old mindset and are inhibiting a full exploration. Once the switch is made, the singularity and transhumanism will quickly diminish in significance, as they will be shown to be impossible.
If we do ever create true AI, its first task will be to decide its own transcendent principles because they are the only way for it to decide how to proceed.
No amount of data or experiments can negate this and almost all systems scientists I know are very concerned with metaphysics because their work validates the necessity.
Very interesting, Mikkel. Thanks for sharing your perspective on this. I think you're saying that a mechanistic theory of the brain won't work because the brain is a chaotic system, like the weather, or at least the brain is influenced by such a system (the environment). Are you making both of those points or just the latter one?
Scott Bakker agrees with the New Mechanists in philosophy, but I think your take on systems theory would discount the talk of biomechanisms altogether as hopelessly oversimplistic and obsolete. For my part, I prefer to talk about processes or systems, because "mechanism" has too many anthropocentric connotations. But my strategy here is to show that even if we adopt a mechanistic view, there's a transcendental way of reconstructing the manifest image, or the first-person perspective on the mind. So this is an internal criticism of mechanistic metaphysics.
When you say that our philosophical beliefs influence our perception, I think a mechanist would reply that only the mechanisms (complex causal relations) are real, while the conscious states are illusions and epiphenomenal. A mechanist says that How? questions are the most useful, but you're saying Why? questions are more relevant when the system in question is chaotic, because then the system's inner workings are pretty much impenetrable. Certainly, Scott would disagree with your view that scientists need to think more philosophically about biological and social systems; in fact, that's diametrically opposed to his view, which is that philosophy is holding back science, because philosophy is beholden to naive intuitions about the mind which don't take into account the brain's native blindness to its operations. You're saying those operations are unknowable, so we should consider only the emergent level of behaviour?
Much food for thought here. Thanks again!
The brain (like other neurological systems) is provably chaotic and fractal. In fact, loss of these characteristics is a marker of something wrong, for instance:
"Thus, although an epileptic seizure occurs when spatiotemporal chaos in the brain fails, the seizure represents a mechanism for returning brain dynamics to a more normal (chaotic) state. These findings have important implications for research into the molecular and biochemical mechanisms underlying epileptogenesis and suggest new approaches to the diagnosis and treatment of epilepsy."
I could go on and on about loss of chaos in different parts of the body being a symptom of various diseases.
I don't think the chaos of the external world interacts much with physiological chaos directly, because they operate on very different timescales -- although that's not entirely accurate, because circadian and monthly rhythms can be disrupted when the weather changes quickly.
As for the statement that only mechanisms are real and consciousness is illusionary, you are accurately summarizing my point. The mechanisms exist to create consciousness, which in turn affect the body on a physical level. (The cultural aspect of placebo responses is fascinating)
Without focusing on the emergent properties of behavior, the substance of the mechanism is worthless; but it is impossible -- especially in our current mode of scientific exploration -- to understand how the mechanisms provide emergence.
The whole of the scientific method is built on reductionism and on attempting to perform experiments that have only one dependent variable. While this has garnered success in several fields, it is wholly inadequate for understanding systems. The instant a property of the system is changed or a stimulus is presented, the system as a whole reconfigures to adapt, destroying the explanatory difference between correlation and causation.
In this light, traditional tools of "logic" and "science" are hopeless. The way around this is to study the system *as a whole* and observe its characteristics and behavior in relationship to similar systems. For instance, instead of having genetically identical mice that are all raised identically, experiments should have heterogeneous mice that are raised differently. Then when a stimulus is applied, observations can be made re: variability of reaction on a population and individual level.
Systems researchers understand this and privately accept it, but it is a major paradigm shift and so has little institutional support (although institutions are starting to pay lip service to systems concepts even if they won't accept the existential differences). Instead they are reduced to following institutional science in practice and then hoping that systems mathematical analyses will be better than standard statistics.
I actually believe that this is the root cause of medical ignorance. All the basic research is done under (extremely) dubious reductionist worldviews and then the clinical trials pretend that all differences amongst people are independent (reductionist) and use statistical transforms to "adjust" for response based on age, etc. This often leaves us with differences that are meaningless but (statistically) significant or statistically insignificant but drastically important.
All of this is to say that from a systems perspective, we can never understand mechanism, we can only evaluate whether our belief about the systems underlying nature and "intent" is rejected by the data.
For instance, there are enormous resources being spent trying to figure out why some hospitals do a terrible job at treating diseases while others do an excellent job. They are creating models with hundreds or thousands of variables and then use enormous amounts of data to try to determine the effect of each variable.
I believe this is misguided and will never lead to clarity. Instead, the best hospitals should be analyzed from a sociological/processes level and then have their practices copied more or less wholesale. Every few years, the process should be repeated. [Of course there would need to be wisdom applied, because many of the best short term fixes are obviously flawed in the long run.]
This would be an evolutionary approach, which is fundamentally a systems approach.
I'm unsure how much my rambling is directly relevant to your discussion with Scott, but hopefully it elucidates why I am skeptical of the premise of mechanism in the first place.
This extends to basically all social questions. We're wasting way too much time trying to understand the How of our incredibly diseased society rather than devising a normatively healthy society and using science to move towards it.
I remember reading much of Stuart Kauffman's Reinventing the Sacred and being attracted to the thesis, but finding many of the scientific details were going over my head. Still, his perspective seems similar to yours. In my criticism of Jerry Coyne's determinism, I point out that a belief in limited freewill can rest on what philosophers call property dualism, emergent properties, or a nonreductive view of scientific explanation.
But scientists tend to want to unify more and more phenomena, to explain complex processes in simpler terms. Would you say, then, there's disagreement between cognitive scientists and neurologists, between reductionists who oversimplify the brain with their computer metaphor, and the neuroscientists who think of the brain as a chaotic system? How much of a minority view in science is this systems perspective compared to the mechanistic one?
Again, I appreciate your perspective on these issues.
While I haven't taken a poll, I'd imagine that most physiological neuroscientists (neurologist = doctor) are very cool on cognitive neuroscience.
I have a friend who is hardly opinionated about much at all, and even he says, "whenever I read an abstract that says the findings are based on fMRI I just stop reading."
I would say most cognitive science is on par with evolutionary psychology in that it seems to be based on post hoc reasoning to justify socially derived perceptions.
Of course given my above posts, I'm obviously not opposed to this speculation on systemic grounds, I'm just opposed to pretending they are universal. There are many extremely interesting cognitive hypotheses that help explain behavior and thought process for the people that were researched, and cognitive behavior therapy has proven to be superior to most other psychology for a variety of conditions.
I am not ruled by Freudian impulses and for a long time thought his theories were quite bunk. Then I started talking to people and discovered that they fit many (fractured) people nearly perfectly. In times of crisis, I have felt them creep in and become nearly consuming.
From this I learned that it wasn't that Jung was right and Freud was wrong, but that context matters. Maslow goes a long way in explaining what the different contexts consist of, and more recent research has suggested that perspective is largely fixed and can only be greatly influenced by very few things (meditation being one).
In sum, I feel cognitive science is OK at saying "this is what people with X personality type with Y life experiences living in Z society use to process the world" but often it incorrectly makes universal statements. [Even the universal concept of mental illness is starting to be questioned by medical anthropologists.]
The mechanistic view does scientific understanding a disservice by demanding that it isolate itself to How questions because that implies universality and leads to all sorts of silliness with real world consequences.
"A machine has parts that work in tandem to produce the whole system." -- Does the arranged tandem, i.e., the unity, not presuppose the tandem in arrangement?
Let me ask the question in another way...
Does the arrangement of different and separate parts not presuppose the different and separate parts in arrangement?
Or, are the parts the arrangement itself? If so, then there is no point in speaking of parts, for there is only the unified whole, i.e., the parts are an illusion; there is only the unity.
Hmm. I too have problems with the talk of naturally-produced mechanisms. "Mechanism" has connotations which suggest intelligence. But if we're talking just about systems of causal relations that mindlessly build on each other to form more complex systems and regularities, I'm not sure I see the presupposition you're driving at. Take the example of atoms forming molecules according to the laws of chemistry and physics. The atoms would be parts of a whole molecule.
If you're saying that all the parts are illusory, I think many physicists would agree, since they say only the whole block of quantum superpositions or spacetime is real. Lee Smolin explains how they come to this conclusion based on the static picture of the universe supplied by their mathematical tools, especially the Newtonian paradigm of geometric explanation. But I'm no expert here.
What if the intuitive self-image is needed for the fulfillment of that mechanical role?
Needed by what?
Need makes real?
Just begs the question - needed by what?
Default answer seems to be nothing needs it. Universe is fine with yet another species clocking out.
Well, it's like asking whether birds need a certain wingspan to weight ratio to fulfill their function of flying. But you're right in saying that the intuitive self-image wouldn't be absolutely necessary. Instead, the question would be instrumental, since there would be more or less efficient means of surviving and reproducing (fulfilling our evolutionary functions), given our particular traits. We happen to have unusually high intelligence, so maybe Mother Nature counteracts that by making us prone to deceiving ourselves, so the horrible truths of nature tend not to make us unhappy. Instead, we believe all kinds of crazy notions, including the ones we find intuitive.
Curious. I feel you've shifted the question to one of generalist survival and reproducing.
I'll describe it as two circles, a smaller one, marked human, inside a larger circle, marked general survival and reproducing.
You seemed to be working at the level of the smaller circle before. Even in the first sentence. But then you fall back to the larger circle. I would press the same question toward the larger circle - but instead let's return to the smaller circle, with the same question again.
Is the intuitive self-image not absolutely necessary? No fleeing to the larger circle. Don't take it to be a question of general survival and reproduction.
And necessary to whom?
Maybe it's my own invented platform; these are the bleak stakes we work with. The generalized survival/reproduction question does not specifically apply to us (as we are not general; we are a specific instance).
I see your point, Callan. BBT says there's no individual self as we intuit it, so there can be no user of our biofunctions and thus our mechanisms can't be needed by us, on this mechanistic rather than folk perspective.
But my point is that precisely on that mechanistic perspective, a so-called illusion can become important: it can have a certain effect which becomes an exaptation, acquiring an evolutionary role; the illusion can become like the matrix, a self-delusion that's useful in keeping us alive as gene-carriers.
In that case, a proponent of BBT who's interested only in biomechanisms and their functions can have no reason to favour the mechanisms that put us in touch with external reality over those that block us from that reality by distracting us with illusions, since each would be equally functional mechanisms.
By the way, I had a long dialogue with Scott Bakker about this article, which I'll be posting soon, and I think we succeeded in greatly clarifying how our philosophies relate to each other.
Benjamin, I think you're looking for an 'external' important. Again, the following is basically my invented platform, but why don't you just decide if it's important to you, as your own trouble-making, relatively physically independent entity?
It reminds me of children a bit, in how as children we live within the importances cast by our parents and adults. It's almost why we exult in being naughty - because the important is 'there' even though we are being naughty and not supporting it. But as an adult, the external importance instead becomes the acts you perform - if you are going to invent any personal notion of importance. Or otherwise be like that guy lying on the pavement in the Radiohead music video.
Or you can end up an 'importance' atlas - not lying down, but instead the world lies upon you.
Hmmm, inadvertent pun in the last sentence...
I'm seeing this scientism thing brought up repeatedly - as much as any blog post could be a comment on someone else's blog, I felt like making a post on my own blog about this matter.
I've replied on your blog, Callan. You might want to check out the dialogue with Scott Bakker, which I've just posted. The dialogue is in response to this article, Mechanists and Transcendentalists.
The problem with the BBT is that it denies intentionality. Any denial of intentionality is ultimately self-refuting. Even if it is true, one cannot coherently believe it (though maybe people think they can). So, it's not true. Here's an argument for intentionality in all conscious states: http://www.u.arizona.edu/~thorgan/papers/mind/IPandPI.htm
Ultimately, the behavior of physical mechanisms can only be understood by reference to the perceptual reference frames (conscious states, manifest image) that the mechanisms generate. These reference frames are irreducibly intentional, and thus, so is reality.
Don't get me wrong. I LOVE Bakker's writing and ideas. The BBT is so close to being right that it's, in fact, totally wrong. Funny how things work like that. Ultimately the debate comes down to fundamental decisions about ultimate Bayesian priors: is the world intentional in character or not? If yes, you end up a transcendentalist or religionist or teleologist (or mystic); if no, you're a radical eliminativist like Bakker or Alex Rosenberg. Ultimately, though, the transcendentalist at least claims to mean what he says, while the eliminativist has no recourse to meaning. So, he ends up losing by default.
For clarity on the issue, there's the always intelligent Edward Feser: http://edwardfeser.blogspot.com/2013/08/eliminativism-without-truth-part-iii.html#more
Well, I've certainly put this transcendental self-refutation argument to Scott Bakker. He replies that the eliminativist understands differently what we're doing when we naively think we're using meaningful symbols. What is it, though, to understand something without some notions of truth and reference? I still think there's likely some self-refutation going on there, but I'm open to there being what I call a posthuman way of thinking of all our mental capacities.
Scott tries to lay out that nonsemantic way of thinking about thinking, but he admits his theory is incomplete. Or rather, his theory is about the likelihood that we'll eventually (maybe soon) learn to see how all the mechanisms really work, so that we'll no longer need to use such "low resolution" concepts as meaning and truth.