




I get to be first, by right of time zone. Anyway, I wanted to thank Peter for his gracious response. He took my criticisms in the spirit intended, and I hope my admiration for his book comes through despite my criticisms of some aspects. Briefly: it's important, but not - I think - for the free will debate.

Let me also respond to Peter's response. Sure, people have seen that the exclusion argument applies to genes too; Louise Antony has written insightfully on this. My view, in fact, is that Crick and Watson and their successors make it much more (not less) plausible that genes owe their causal powers to the proteins that realize them (of course, causal powers drain away from that level, to the molecular and ultimately the physical). My hope is that we can accept this claim by way of accepting token identity (I don't see why type identity is forced on us; perhaps someone will enlighten me). But not even type identity threatens free will, so far as I can see.

Everybody's right! (except Kim.) Neil is right that, to the extent that Kim's argument is focused on the causal powers of qualia (or phenomenal consciousness), it's likely that theories of free will need not address the argument. Consciousness and the causal relevance of conscious mental states (CMS) matter to free will, but it is the informational (or representational or integrative) properties of CMS that matter, not the elusive phenomenal properties. It matters that our conscious deliberations make a difference to what we do, because the reasons that we deliberate about have to play the right role in our actions.

Peter is right that Kim's argument fails as an argument against the causal powers of the informational properties of CMS, both because multiple realizability suggests that it is the information (of the CMS at time T) that is causally relevant and not (just) the particular physical state that happens to be the realizer of that CMS at that time (this may end up being token identity), and because as Peter notes, Kim's argument is a reductio if it entails that nothing has causal powers except whatever exists at the lowest physical level (if there is one). The argument, in my opinion, only seems to work for things like phenomenal consciousness because we have no theory that helps us understand the relationship between consciousness (as it feels from the first-person p.o.v.) and mushy brain states. (No 'neural code' in Peter's terms.)

But Peter is right that it is our first-personal consciousness that matters to people when it comes to free will. We want our conscious self to make a difference to what happens. So, securing the causal relevance of informational properties of CMS (or Mele's 'neural correlates') won't seem to secure free will unless and until we have a theory of consciousness that satisfactorily explains the relationship between the neural correlates (and the informational properties) and the subjective experiences of consciousness. No easy task, but theories like Peter's can help us on our way.

Finally, Neil is right that putting some indeterminism into the brain in the right places does not help respond to Kim's arguments (unless the causal closure principle is also rejected). Conversely, the causal relevance of consciousness can be secured even in a deterministic universe by responding to Kim's arguments in some of the other ways discussed above.

Thanks for the interesting discussion Peter and Neil!

Is there not a disanalogy between causation of specific physical states by mental states and the idea that "it is genetic information qua information that makes twins look alike"? The fact of two things looking alike is a fact about information, not its realisation. If we asked whether genetic information caused the complete, specific physical properties of the pair of twins (at some time), that would make a better analogy with the problem of mental causation, and I could see how someone might deny that genetic information can cause those specific physical states (because of worries about exclusion). They might insist that genetic information causes it to be the case that some physical situation in which the twins look alike obtains, but not which one.

Of course, we could, by analogy with genes, drop the demand that mental states cause physical states, in favour of the demand that they cause it to be the case that some physical situation in which [some kind of multiply realisable fact] obtains, but not any specific one. Maybe we only expect mental states to cause physical ones because we get that mixed up with this more appropriate kind of causation, the kind already found in genes and so on? Though I must admit I would still quite like it to be the case that my mental states cause various physical events, not just the fact that some disjunction of possible physical facts must obtain.

On this: "If c does not provide sufficient grounds for why one possible outcome occurs over another, exclusion of overdetermination cannot be used to rule out the possibility that the physical realization of present mental events might bias which particle possibilia will become actualia in the imminent future." - is the idea that, even if every physical event has a physical cause, if that cause could have caused something else instead, then there is a role for the mental in deciding which possible effect was actually caused? I think this would just be a violation of causal closure. There's not much difference between saying that X happened only because there was a physical state that could cause X or Y, and a mental state made it cause X, and saying that X happened only because the combination of a physical state and a mental one caused it.

It is the lot of the peacemaker to be cursed in their own country. Or so Eddy may conclude after my response.

The exclusion argument just isn't about phenomenal consciousness. It is about mental states and their causal powers. It is true that conscious mental states raise special problems (because consciousness seems so hard to naturalise) but the problem is - as Peter recognizes - perfectly general. If there are zombies, the exclusion problem applies to their mental states too.

I don't think appeal to multiple realizability helps one bit. If you think - as you seem to - that multiple realizability entails (or makes attractive?) token identity rather than type identity, fine: the problem still remains: is it the mental state *qua* mental state playing the causal role? How? Barring overdetermination, there doesn't seem to be room for it to play this role; the physical state to which it is (token) identical does all the causal work. Multiple realizability is a red herring here.

The argument isn't properly construed as a reductio, no matter how far we extend it. Remember, the conclusion isn't "mental states don't have causal powers"; rather, the conclusion is that "mental states have causal powers by being identical to the physical states to which they are identical". If we generalize this to all the properties of the special sciences, we get the entirely unabsurd conclusion that these states have their causal powers by being token identical to physical states.

Neil, to clarify, I meant that the exclusion argument is about phenomenal consciousness in that Kim himself takes it to show that qualia are epiphenomenal, but (consistent with everything you're saying) he says that because he thinks other mental properties (and other high-level properties) are identical with physical states and hence causal in virtue of the causal powers of those physical states. As such, and using the conclusion you suggest, it may not be a reductio, yet I take it that many philosophers and scientists think it is obvious that high-level entities (such as genes, neurons, clouds, organisms, pistons, etc.) have causal powers in virtue of the type of entities they are and not (solely) "by being identical to the physical states to which they are identical."

Much depends on one's theory of causation. Kim is a production theorist, and that makes his argument more plausible. On other causal theories, including contrastive, counterfactual, and interventionist theories, multiple realizability is not a red herring at all. Rather, the fact that informational state S1 could be realized by a range of physical states P1-PN, and that informational state S2 counterfactually depends on S1 but *not* on any one of the specific physical states, including the one that actually realizes S1 on this occasion (e.g., P3), suggests that S1 is what makes a difference to S2 in a way that P3 does not. If we want to causally manipulate S2, manipulating P3 may not do it (e.g., if we alter it to P1 or P4, or one of the other S1 realizers); rather, we need to manipulate S1 (yes, by altering its realizers in the right way, but the right way will involve considerations at the S-level, not the P-level). S2 *rather than* S7 occurs *because* S1 *rather than* S1' occurred, and not because P3 rather than P4 occurred.
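Eddy's contrastive claim can be put in a toy sketch (the realizer-to-state mapping below is my own illustrative assumption, not anything from the thread): S2 depends only on which S-level class the current realizer falls into, so a P-level intervention that stays within the S1 class makes no difference to S2.

```python
# Toy model: S2 counterfactually depends on the informational state S1,
# not on which particular realizer (P1..PN) instantiates S1 on a given
# occasion. All names here are illustrative placeholders.

# Map each physical realizer to the informational state it realizes.
REALIZES = {"P1": "S1", "P2": "S1", "P3": "S1", "P4": "S1",
            "P5": "S1'", "P6": "S1'"}

def next_info_state(physical_state):
    """S2 occurs iff the current realizer realizes S1 (any S1 realizer will do)."""
    return "S2" if REALIZES[physical_state] == "S1" else "S7"

# Intervening on the P-level *within* the same S-class changes nothing:
assert next_info_state("P3") == next_info_state("P1") == "S2"
# Only an intervention that changes the S-level makes a difference:
assert next_info_state("P5") == "S7"
```

The design point: manipulating the realizer matters only insofar as it moves the system between S-level classes, which is the interventionist's sense in which S1, not P3, is the difference-maker.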

P.S. I'm signing with the fake middle names in honor of Manuel "Mr. October" Vargas, who I'm going to say invented that idea.

Eddy, thanks. I now see why you think multiple realizability is relevant here. I still don't buy it. Here's why: the counterfactuals are true both at the level of the mental states and at the level of their realizers (it is for this reason, in part, that I think knowing the "code" makes the exclusion problem more, not less, pressing: geneticists in fact know enough to alter a phenotypic trait not by altering the gene but by making changes at the molecular level). So we get the problem of overdetermination again. I think, though, that invocation of the counterfactuals also shows why we don't need a solution: counterfactual dependence is enough (it gives you everything that John Fischer wants, for instance). As to qualia, I don't see why there is a special problem here for physicalists. If conscious states are reducible to physical states, then we can say exactly what we say about propositional attitudes with respect to causation.

In any case, we agree on the essential point: the exclusion argument is orthogonal to issues of determinism.

Thanks for the exciting discussions everyone! I have a conceptual question about mind or information that philosophers could help clarify for the brain sciences: What does mental or informational supervenience precisely refer to in the context of neurons? Is the information realized in the signal sent to a decoder (say presynaptic spikes sent to postsynaptic neuron), or is the information realized in the decoding of those spikes by the postsynaptic neuron, or is the information realized in the setting up of informational criteria for postsynaptic firing, perhaps long before any presynaptic spikes arrive? There are three different types of information physically realized here that are not simultaneous and not overlapping in space or even matter. If information is realized in the entire durational process, however, this gives rise to other problems for the idea of supervenience, or at least its presumed synchronicity.

Supervenience assumes that mental states are synchronically determined by underlying microphysical states (the realization thesis). But certain physical attributes are inherently durational, and it is possible that mental events might not be well defined at a given instant. If mental events are inherently durational, they must be realized in physical processes that are extended in time. For example, if information is realized in spikes that are not instantaneous, the realization thesis may need to be modified such that mental states are not synchronically or instantaneously determined by underlying microphysical states. In particular, the realization thesis might be wrong if it is the temporally extended transition from P1 to P2 that realizes M2, rather than the occurrence of P2 in isolation at an instant. On the standard account of supervenience, whether a physical state P1sub1 leads to P2 or a different physical state P1sub2 leads to the same P2, P2 realizes M2 identically. On the present account, however, the mental states realized by these two different transitions to P2 could differ. If that were the case, then two different mental states could supervene on two identical brain states, because of the two different paths taken to that brain state, in violation of the fundamental assumption of supervenience as presently synchronically conceived.

I don't think that's right, Peter. Supervenience is the thesis that there is no difference in the supervening property without a difference in the base property. It doesn't commit us to the claim that the relevant difference must be synchronous. Supervenience was originally introduced in moral philosophy; there it's clear that differences in moral properties can depend on changes over time. (What is the moral property that supervenes on the physical properties that make up "inserting a blade in someone's flesh"? A lot depends on what happens before the insertion: was the "victim" given anaesthesia and an informed consent form? Or did they just happen to wander into an operating theatre and lie on the table, only for a crazed surgeon to assault them?)

Ok, I'm going to ask a stupid question. What about all this settles issues of control that connect with the mental side of things? It seems to me that any resolution of issues about supervenience of the mental on the physical cannot settle this issue, whether determinism or indeterminism is the case, if the mental does not acquire some additional control in the offered account. An account that merely adds in future possibilities for minds, complicating causality questions by introducing indeterminism, can't by itself add in sufficient conditions for control, and plausibly only introduces conditions that undermine control by chance (on the usual accounts of quantum indeterminism). I'm not entirely sure I'm aligning with Neil on this, or that I'm particularly well-informed on the crucial issues, but I seem to side more with Neil as I understand all this.

Regarding Neil’s support of token identity, is it not a problem that identical things should have identical properties? But a mental state, such as a thought of democracy, has no mass, whereas the neural processes that realize that thought do, so they cannot be identical. To avoid this problem, we could identify the mental with the information that is realized in neural activity. Such information is multiply realizable in countless token physical states, whether neuronal or particulate. But is this not just another form of token identity? No, for that to be true it would have to be true that the information realized in a token physical state is identical to that physical state. This runs into the problem of incommensurable properties again.

In my book I argue that information is not token identical to particle states per se, but rather arises in acts of decoding those spatiotemporal relationships among particles to which the decoder is tuned. Information arises from immaterial spatiotemporal relationships among material entities for a decoder tuned to this pattern of immaterial relationships. For example, many neurons are coincidence detectors, but coincidence has no mass; it is a temporal relationship among things that have mass. Multiple realizability is central to this conception of informational causation because a decoder responds to immaterial relationships among particles that can be realized by countless particle states. It is materially realized patterns--themselves immaterial--that are primarily causal in a system built on pattern detectors. On this account, information is not carried primarily by the amount or frequency of energy, but by its phase relationships. Note that spatiotemporal relationships are not made of matter in addition to the constituent particles. And such patterns also do not obey the conservation laws obeyed by amounts of energy. Patterns or phase relationships in energy can be created and destroyed. Pattern causation or energetic phase causation, where physical consequences, such as channel opening or neural firing, are triggered by a spatiotemporal pattern of input, was perhaps the most basic innovation of biological systems. Prior to life, physical causation boiled down to local energy transfer among particles. In fact, I think it is this Newtonian conception of causation that lurks behind the exclusion argument. Global patterns, such as coincident arrival of spikes, were not causal before the existence of coincidence detectors. Life, it seems, introduced a new kind of physical causation into the universe, one where causation occurs among a succession of immaterial patterns.
A pattern is not identical to the neural activity that carries the pattern, because the pattern only exists relative to a decoder set to do something, like fire, when that pattern has been detected. The pattern has no real existence as a material thing apart from that decoder. For example, the big dipper as a pattern triggers the firing of neurons tuned to this pattern, but there is no big dipper really out there in the world. Sure, there are suns at vastly different distances. But the pattern of the big dipper emerges as an accident of Earth's placement. This pattern has no physical existence as a thing made of particles and energy over and above the suns that cast light. And this pattern counts as a pattern only if there is a big-dipper detector that fires in response to it.
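Peter's point that a pattern exists as a pattern only relative to a tuned decoder can be rendered as a toy sketch (the coordinates and the simple subset test are my own illustrative assumptions): the same "sky" carries the big-dipper pattern for one detector and no pattern at all for a differently tuned one.

```python
# Toy "big dipper detector": a pattern counts as a pattern only relative
# to a detector tuned to it. Same sky, differently tuned detector: no
# big-dipper information at all. Star positions are made-up coordinates.

BIG_DIPPER = frozenset([(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 1), (6, 0)])

def make_detector(template):
    """Return a detector that 'fires' only when its template appears in the input."""
    def detector(visible_points):
        return template <= visible_points   # subset test: is the template present?
    return detector

sky = BIG_DIPPER | {(9, 9), (7, 5)}         # the relevant stars, plus others

dipper_detector = make_detector(BIG_DIPPER)
other_detector = make_detector(frozenset([(10, 10), (11, 11)]))

assert dipper_detector(sky)       # the pattern "exists" for this decoder
assert not other_detector(sky)    # same sky, no pattern for this one
```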

Neil, like Kim, is pushing for a view on which all higher-level causation seeps away to the level of strings or whatever is at the bottom level, making it epiphenomenal. But if macroscopic settings, such as synaptic weights that realize informational ‘pattern detection’ criteria, bias which subset of lowest-level events can be realized from among all possible such lowest-level events, that is top-down physical causation. In fact it is top-down (physically realized) informational causation.

Appeals to intuition like this (brain states have mass; beliefs don't have mass; therefore brain states are not identical to beliefs) have a distinguished lineage in philosophy. The problem is that we don't expect a posteriori identifications to be intuitive. If someone were to say "H2O can't be water, because water is wet but a formula can't be wet" I wouldn't be moved. The identification proposed is a posteriori, so the appeal to intuition carries no weight. This is especially the case when we're asked to decide between two views, one of which is somewhat counterintuitive and the other totally mysterious: totally mysterious because no mechanism for top-down causation is provided.

Alan, you're skipping ahead to the next set of issues. In my review I gave several reasons to think that the debate is irrelevant to free will; you're skipping ahead in that we've focused only on the first necessary condition of Peter's argument being relevant - that indeterminism bears on mental causation - whereas you're focusing on the second. Libertarians sometimes write as if establishing that mental processes are indeterministic is establishing the truth of libertarianism. But I think it is an even money bet that mental processes are indeterministic; that's merely one of the jointly necessary conditions for the vindication of libertarianism.

Neil--then thanks to your comments I think I'm not daft; at least I get the stacking of necessary conditions in my understanding. Then I'm meta-wondering whether the consideration of necessary conditions is well-ordered.

Eddy, I think you're much too sanguine about epiphenomenalism about qualia. After all, some significant volitions are directed in part at subjective experience. On a view of mental causation which allowed only information a causal role and also denied that qualia are a type of information, it would follow that nobody ever acts for the sake of qualia. We no longer get to pursue the peculiar sensations of sex, drugs, or rock&roll! (Although paradoxically, we might still get to pursue *pleasure* as such, if that can be defined in a purely functionalist way.) This wouldn't be a denial of free will as such, but it would still be - well, weird.

There is at least one neuropsych theory, however, that says qualia *are* a type of information: so let's not throw them overboard from the causal network just yet.

Though not as strong or wrong as type identity, I still think the token identification that Neil is advocating is too strong to correctly capture the relationship between mind and matter. A weaker view is one identifying mind with the information realized in the sorts of physical events that neurons carry out. This is weaker because the same information can be realized in countless different physical bases. The mental is not identical to any one physical token realization, because if it were, tokens would have to be, by transitivity, all physically identical to each other because they are each token identical to the same mental state. But they are all physically non-identical tokens. So if the same information can be realized in multiple physical ways, token identity is incorrect. This is true of mind or any information multiply realizable in matter.

Even if an identification is a posteriori, if two referents are identical, should they not have identical properties? Or is Leibniz’s law not applicable? We are not talking about Fregean senses being identical here, but the thing-in-itself that is referred to. ‘H2O’ as a formula is obviously not identical to that which I experience as wet. Those are two different senses referring to water. But the physical substance that ‘H2O’ refers to, and the physical substance that I refer to as wet, if they are indeed identical referents, must have identical properties. Token identity theories presumably want to identify the referent of ‘mind’ with the referent of ‘brain.’ So it seems to me the problem does not go away; if the referent of 'information' lacks mass or extension, how can it be identical to its realizing physical basis, which has mass and extension?

Alan, Neil and I are wrestling over one of the most basic questions in the mind-body debate, namely whether information can be causal at all if it is physically realized. If it cannot be, as the exclusion argument purports to show, then aiming for an understanding of the types of neuronal causal chains that could give rise to free will or mental (downward) causation is pointless, because, as Block put it, all higher-level causal chains “seep away” into causal chains at the root-most level. Neil and I appear to occupy nearly opposite ends of the spectrum of positions on this. I think the most central disagreement is whether information can be downwardly causal. We are both physicalists, so when we talk of information (qua information), we must mean physically realized information. Neil comes from the reductionist, eliminativist, and perhaps determinist tradition, and I am arguing in the tradition of emergentist and indeterminist anti-reductionism. It is an old debate for sure, but one worth having again because science has made progress that, I think, can soon settle the debate in favor of emergentism and anti-reductionism. At its root, to believe that information can be causal, one has to be willing to take the strong stand that (physically realized) information can influence events at the root-most physical level to turn out one way versus another. If determinism is the case, exclusion applies, because events at the root-most level are governed by deterministic laws of particle interactions, which are then sufficient to account for events at that and all higher levels. However, if indeterminism is the case, there is room for physically and macroscopically realized information to bias events at the root-most physical level to turn out one way versus another. First, the evidence from physics overwhelmingly supports indeterminism at the root-most level of causation.
The question then becomes whether the brain can harness microscopic indeterminism for its own macroscopic information-processing ends. I argue that the brain harnesses atomic-level randomness by amplifying it to the macroscopic level of neuronal spike timing; in a system built around temporal coincidence detection, randomizing timing has the effect of randomizing what information might meet informational criteria for firing. Second, when Neil says “no mechanism for top-down causation is provided” I can only reply with my book, which is all about a physical mechanism for top-down causation: present, physically realized informational criteria for neuronal firing influence which possible particle paths can become real, by only allowing possible physical causal chains that are also informational causal chains to occur.

Here's what I mean when I say no mechanism has been provided, Peter. We agree that representational content supervenes on physical properties (of the brain, let's suppose - I actually have a broader conception of which physical properties can realize information, but that's irrelevant here). You, unlike me, think that the representational contents have causal powers. Moreover, you think that the causal powers are top-down: from the mental to the physical. Now there is some sense in which I accept the claim that informational criteria influence which causal pathways become actual, since I believe that (thanks to evolution) some of the physical pathways in the brain are also mental; that is, that syntactical properties are mirrored in physical properties (that's a standard tenet of computationalism). But that's not top-down causation: whether the mental is identical to, reducible to, or even irreducible to, the physical, it's ordinary causation at the same level (mental-mental or physical-physical). That's sufficient to show that your arguments for irreducibility do not provide a mechanism for mental-to-physical causation. Much more is needed. I am quite prepared to countenance indeterminism in the brain (that's an open question, so far as I can see). And therefore I am quite prepared to grant you your starting position: physical causation leaves open a range of possibilities as to what state of the brain becomes actual. But I just don't see a mechanism for how the mental influences which of these states become actual that isn't entirely accounted for by the computational tenet described above. In fact it is entirely mysterious how the mental influences which of the possible physical states become actual, except by the mirroring of syntax in physical properties mentioned above.

Thank you Peter (if I may)--that post was incredibly helpful in firming up the issues and my grasp of them.

And a quick comment on how token identity collapses into type identity, given transitivity. If M is token identical to P, where P is a type, then the argument from transitivity will go through. But if M is token identical to P, where P is a token mental state - that belongs to the class (say) believing that elephants are pink but is not identical to every token of that class - then token identity does not collapse.

Paul, like Tononi, I think that qualia should be identified with a class of neurally realized information. In the last chapter of my book I develop a theory of qualia that ties qualia very closely to volitional attention. Qualia include information that is currently attended (such as the visual figure) as well as information that could be volitionally attended in the next moment (such as the contents of the visual background). I argue that qualia comprise a 'precompiled' informational format (produced by preconscious processing) for volitional attentional operations, such as attentional tracking. Such operations facilitate planning and acting in the 'virtual world' that qualia comprise. This format evolved to be informative about states of the body and intrinsic properties of the real world, and evolved to be maximally 'chunked' so that attention could manipulate high-level rather than low-level informational structures.

I don't believe that Tononi's theory is a correct theory of the neural correlates of qualia. I find it to be an improbable version of pan-psychism dressed in the language of Shannon, according to which a beehive or even NYC would be conscious. To believers in phi or integrated information I would ask 'If NYC is really conscious, what is NYC conscious of?' The essence of the problem is that Tononi's idea of integrated information dispenses with decoding as essential to what information is. On his view, information simply is the occupying of some possible state in a system that can occupy many possible states. Tononi’s view, that a decoder is not needed to generate information, is problematic because information changes depending on how identical input is decoded. Indeed, one could reasonably argue that there is no information at all in, say, a free-floating strand of DNA or mRNA. Rather, DNA and mRNA contain the potential for information, relative to a given decoder that decodes it. But the potential for realizing information relative to a given decoder is not the same as actually realizing information via physical state changes of that decoder. For example, a ribosome that “reads” mRNA will build proteins. But relative to a different decoder, the same mRNA strand might be used to build cities or write sonnets. A central problem for any theory of information is that given any physical configuration, a physically realized decoder can be designed that does whatever you would like in response to that configuration. Fashion the right decoder, and even cracks on a burned tortoise shell can be thought to carry information about the future, the temperature on the sun, or anything else. Lucky for us, our decoders have evolved to lead to adaptive rather than maladaptive thoughts and actions in most cases for a given input. 
Against Tononi, I believe that we cannot dispense with the notion of decoding in thinking about how information is realized in neurons, including the special subclass of information that we can volitionally attend, namely qualia. NYC does not really have informational decoders if you don't count those subsystems, such as people, who have neuronal decoders. So on my account NYC or a beehive would not be conscious, though people experience qualia and bees might too.

Neil, if I understand what you are saying, it is that if information is realized in a physical substrate, as we agree it must be, assuming physical monism, then there is only physical causation among physical entities, and therefore information can play no causal role. In contrast, I argue that information can play a causal role even if always physically realized because it is actually realized in a pattern of spatiotemporal relationships among physical things that a decoder responds to, and that pattern is itself immaterial, not itself subject to the laws of physics, such as the conservation of energy. Once we open the door to causation as a succession of energetic patterns rather than a succession of energetic particles, which is the Newtonian colliding billiard balls conception, we can have supervening causal chains that are consistent with the laws of physics but are not reducible to the laws of physics. We can have a succession of patterns that realizes the rules of baseball, for example. Any particular game of baseball is realized in one particular succession of particle level states. But if indeterminism is the case, then practically infinitely many successions of particle states were possible. Why did just one of the subset of such particle-level successions occur that is also consistent with the rules of baseball? Because the people involved set top-down conditions on their actions, realized ultimately in particular synaptic weight settings, that would be consistent with the rules of baseball.

A metaphor: imagine a magic cup that changes its substance, but not its shape, once a second. First it is made of gold, then plastic, then marble, and so on. Its shape is realized in different physical substrates but itself remains constant. Shape, although realized in matter, does not itself obey the laws of matter. Shape can be created and destroyed, as when you remold clay or drop a cup and break it. Conditions for action can be set on the shape of the cup rather than on its substance, mass, or particle state: for example, "If goblet, pour wine; if beer stein, pour beer; if mug, pour coffee." Such causation is indifferent to the particular particle state, or even the substance, realizing the shape. This is the sense in which I think information is causal.
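The pouring rule can be written out as a small sketch (mine, not the author's): the dispatch reads only the vessel's shape and is blind to its substance.

```python
def pour(vessel):
    """Condition the action on shape alone, ignoring substance entirely."""
    rules = {"goblet": "wine", "beer stein": "beer", "mug": "coffee"}
    return rules[vessel["shape"]]

# Different substances, same shape:
gold_mug   = {"shape": "mug", "substance": "gold"}
marble_mug = {"shape": "mug", "substance": "marble"}

# Same shape, same causal consequence, whatever the substrate:
assert pour(gold_mug) == pour(marble_mug) == "coffee"
```

The function never consults `vessel["substance"]`; swapping the realizer leaves the outcome untouched, which is the point of the metaphor.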

Can information remain constant even though the realizing brain state changes? Of course it can. If I look at a phone book and memorize a phone number, say "(212) WA3-6349", I will rehearse it by saying it over and over in my head as I walk to the phone. If the phone is far away, I might say it to myself 25 times. Each time I am in a different brain state, but the information remains the same, and when I dial the phone, I dial it on the basis of that information. Yes, there is always a physical realization of the information, but its causal power is not in virtue of its mass or particle positions. Its causal power is in virtue of immaterial patterns, which can meet the criteria placed on patterns of input by one or more decoders of energetic patterns.
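A toy sketch of that constancy (my construction): the number can be realized in different surface formats, letters or digits, punctuated or not, yet a decoder recovers the same dialed digits from each realization.

```python
# Standard telephone keypad mapping from letters to digits.
KEYPAD = {c: d for d, letters in {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}.items() for c in letters}

def dial(realization):
    """Decode any surface realization down to the digits actually dialed."""
    digits = []
    for ch in realization.upper():
        if ch.isdigit():
            digits.append(ch)
        elif ch in KEYPAD:
            digits.append(KEYPAD[ch])
    return "".join(digits)

# Two different realizations, one piece of information:
assert dial("(212) WA3-6349") == dial("2129236349") == "2129236349"
```

What the dialing depends on is exactly what survives the change of realization.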

Patterns in input can be genuinely causal only if there are physical detectors, such as neurons, that respond to patterns in input and then change the physical system in which they reside when the criteria for the presence of a pattern have been met. Neurons that respond to patterns in input, such as temporal coincidences that carry information about spatial and other patterns in the world (say, the spatial pattern of the constellation Orion), can be thought of as realizing a downwardly causal physical process.

The causal efficacy of pattern detection among neurons is fundamentally different from the spurious "downward causation" of a hurricane, because pattern detection among neurons cannot be reduced to the localistic transfer of energy among colliding particles, as it can in the case of a hurricane. A neuron does not absorb and transfer energy like a billiard ball. It assesses its inputs for satisfaction of physical/informational criteria which, if met, lead to output. These criteria do not assess the amount of energy in the inputs or particle positions; they assess global patterns in the input. One of the most important assessments is of the coincidence of inputs, that is, their arrival within the temporal integration window of the neuron. By responding to patterns in energy rather than just amounts of energy, neurons permit spatiotemporal patterns of input to become causal in the universe. Without such a mechanism of pattern detection and the triggered physical response of firing, patterns in energy, like hurricanes, would be only epiphenomenally causal. But with a physical mechanism in place that responds to patterns in energy by generating a physical response, namely firing, within a cascade of such pattern detectors, patterns in energetic inputs become genuinely causal.
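A minimal sketch of the coincidence criterion (my construction; the window width and threshold are arbitrary assumptions): the "neuron" fires only when enough input spikes fall within its temporal integration window, a criterion on the pattern of the inputs, not on their total amount.

```python
def fires(spike_times, window=5.0, threshold=3):
    """Return True if at least `threshold` spikes arrive within
    some interval of width `window` (the integration window)."""
    spikes = sorted(spike_times)
    for t in spikes:
        inside = [s for s in spikes if t <= s < t + window]
        if len(inside) >= threshold:
            return True
    return False

# Same number of input spikes (same "amount"), different temporal patterns:
assert fires([10.0, 11.5, 12.0])        # coincident inputs -> output
assert not fires([10.0, 40.0, 90.0])    # dispersed inputs  -> no output
```

Both input trains deliver the same three spikes; only their spatiotemporal arrangement decides whether the detector responds, which is the sense in which the pattern, not the energy, does the causal work here.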
Thus life, which I believe first introduced causation among the phase or spatiotemporal relationships of energy, introduced a new type of physical causation into the universe that is best thought of as informational causation, or, more generally, as phase or pattern causation.

To deny information a causal role, as you and other reductionists appear to want to do, is, in my opinion, to miss where the real causal action lies. If I convert the Neanderthal genome to letters and transmit these in pulses of light to a manned station on Mars, where astronauts then clone a baby Neanderthal, it is encoded and then decoded information that permits this to occur. Yes, the information was physically realized in radically different formats at each step along this physical causal chain, but what remained constant, and what was truly causal, was the information involved. A huge change at the physical level, such as replacing every atom with a different atom of the same type, would make no difference to the outcome: a baby Neanderthal. But a tiny change at the informational level, say a point mutation in the code, could make the difference between a normal baby and a highly abnormal or even dead one. Among the veritable infinity of possible physical causal chains that could have run from the beginning to the end of this process, why did one occur that resulted in a healthy baby Neanderthal? Because physically realized informational criteria had to be met to release each physical action. This allowed only that minute corner of state space in which physical causal chains are also informational causal chains; the rest of physical state space was disallowed by the imposition of informational criteria. This is the sense in which I mean that information can be downwardly causal in its effects.
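Both asymmetries in the Mars example can be sketched in miniature (my construction; the six-letter "genome" and three-codon table are stand-ins): a radical change of physical format preserves the outcome, while a one-letter change at the informational level transforms it.

```python
genome = "CATGAA"  # a stand-in for a genome, as letters

# Encode into a different physical format (here, a bit string standing in
# for light pulses), "transmit", and decode back.
pulses  = "".join(format(ord(c), "08b") for c in genome)
decoded = "".join(chr(int(pulses[i:i + 8], 2))
                  for i in range(0, len(pulses), 8))
assert decoded == genome  # radical change of format, no change of information

# Decode codons into amino acids with a tiny fragment of the genetic code.
CODE = {"CAT": "His", "GAA": "Glu", "TAA": "STOP"}

def translate(seq):
    return [CODE[seq[i:i + 3]] for i in range(0, len(seq), 3)]

assert translate(decoded) == ["His", "Glu"]

# A single point mutation (G -> T) at the informational level:
mutant = "CATTAA"
assert translate(mutant) == ["His", "STOP"]  # premature stop codon
```

The outcome tracks the information through arbitrary re-encodings, and a minute informational difference, not a large physical one, is what changes it.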
