
Friday, January 29, 2016

How I spent (some of) my winter vacation in Nijmegen

I was asked how it went in Nijmegen. Let me say a word or two, with the understanding that what I say here is very much a one-sided perspective.

When I left for David’s lectures I was pretty confident that the GG linguistic worldview, which I take to be hardly worth contesting, is treated with great skepticism (if not worse; ‘naivety’ and ‘contempt’ are words that come to mind) in the cog-neuro (CN) of language world. One prominent voice of this skepticism is Peter Hagoort, whose views concerning GG I critically discussed here. Before leaving, I was pretty confident that neither my views nor his had changed much, so I was looking forward to a good vigorous back and forth. Here are the slides I presented. They were intended to provoke, though not because I thought that they were anything but anodyne intellectually speaking. The provocation would come from the fact that the truisms I was defending are barely understood by many in the “opposition.”

The presentation had four objectives:

1.     To note that Chomsky’s views are always worth taking seriously, the main reason being that he is very often right
2.     To explain why it is virtually apodictic that
a.     Part of human linguistic capacity involves having a mind/brain-internal G
b.     FL exists and has some linguistically specific structure (aka UG is not null)
3.     To argue that cog-neuro of language investigators should hope like hell that something like the Minimalist program is viable
4.     To dispel some common misconceptions about GG widespread in the CNosphere

This, IMO, went well. I led with (1) in order to capture the audience’s attention (which, I believe, I did). However, I really wanted to make points (2) and (3) in a way that a non-linguist could appreciate. To do this I tried to make a distinction between two claims: whether mental Gs and FL/UG exist in human minds/brains, and what they actually look like. The first, I argued, cannot be contentious (i.e. that humans have mental grammars in the brain is a trivial truth). The second, I noted, must be (e.g. whether FL/UG has bounding nodes and a subjacency principle is an empirical issue). In other words, that humans have internal Gs and that humans have an FL with some linguistically specific properties is a virtual truism. What Gs and FL/UG look like is an empirical question that both is and should be very contentious, as are theories in any domain.

And here was the main point: one should not confuse these two claims. We can argue about what human Gs look like and how FL is constituted. We cannot seriously argue about whether they exist incarnated in a region roughly between our ears.

Why are the points in (2) virtual truisms? For the reasons that Chomsky long ago noted (as I noted in (1), he is very often right). They are simple consequences of two obvious facts.

First, the fact of linguistic creativity: it is obvious that a native speaker can produce and understand an effective infinity of linguistic structures. Most of these structures are novel in the sense that speakers have never encountered them before. Nonetheless, these sentences/phrases etc. are easily produced and understood. This can only be explained if we at least assume that speakers who do this have an internalized set of rules able to generate the structures produced/heard. These rules (aka Gs) must be recursive to allow for the obvious fact of linguistic creativity (the only way to specify an infinite set is recursively). So given that humans display linguistic creativity, and given that this evident capacity requires something like a G, we effortlessly conclude that humans have internal Gs. And assuming we are not dualists, these Gs are coded somehow in human brains. The question is not whether this is so, but what these Gs look like and how brains code them.
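To make the recursion point concrete, here is a minimal sketch (a toy of my own devising, not anything from the talk): a finite set of rewrite rules, one of which reintroduces the sentence category inside itself, generates an unbounded set of distinct sentences.

```python
import random

# Toy grammar: finitely many rules, but S can recur inside VP
# ("thinks that S"), so the set of generable sentences is unbounded.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "dog"], ["the", "cat"], ["Mary"]],
    "VP": [["barked"], ["thinks", "that", "S"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol into words; the depth bound is only so the demo halts."""
    if symbol not in GRAMMAR:
        return [symbol]                                   # terminal word
    options = GRAMMAR[symbol]
    if depth >= max_depth:                                # force a non-recursive choice
        options = [o for o in options if "S" not in o] or options
    words = []
    for sym in random.choice(options):
        words.extend(generate(sym, depth + 1, max_depth))
    return words

for _ in range(3):
    print(" ".join(generate()))
# e.g. "Mary thinks that the cat thinks that the dog barked"
```

Nothing hangs on the details; the point is just that a finite specification yields a discrete infinity of structures, which is all the creativity argument needs.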

The second evident fact is that humans have FLs. Why? Because 'FL' is the name we give to the obvious capacity that humans have to acquire Gs in the reflexive, effortless way they do. To repeat the mantra: nothing does language like humans do language. So, unless you are a dualist, this must mean that there is something special about us that allows for this. As this is a cognitive capacity, the likely locus of difference between us and them lies in our brain (though, were it the kidney, liver or left toe, that would be fine with me). ‘FL’ is the name of this something special. Moreover, it’s a pretty good bet that at least some of FL is cognitively specific to language because, as anyone can see (repeat mantra here), nothing does language like we do language. Ergo, we have something special that they do not. And this something special is reflected in our brains/minds. What that special thing is and how brains embody it remain difficult empirical questions. That said, that humans have FLs with linguistically specific UG features is a truism.

I believe that these two points got across, though I have no idea if the morals were internalized. Some remarks in the question period led me to think that people very often confuse the whether and the what questions. Many seem to think that accepting the trivial truth of (2) means that you have to believe everything that Chomsky has to say about the structure of Gs and FL. I assured the audience that this was not so, although I also mentioned that given Chomsky’s track record on the details it is often a good idea to listen carefully to what he has to say about these empirical matters. I believe that this surprised some who truly believe that GGers are mindlessly in thrall to Chomsky’s every word and accept it as gospel. I pleaded guilty. Thus, I assured them that though it was true that, as a matter of cognitive policy, I always try my hardest to believe what Chomsky does, my attitudes are not widely shared and are not considered prerequisites for good standing in the GG community. Moreover, sadly, even I have trouble keeping to my methodological commitment of intellectual subservience all the time.

I next argued that Minimalism (M) is not the bogeyman that so many non-linguists (and even linguists) think it is. In fact, I noted that cog-neuro types should hope like hell that some version of the program succeeds. Why? If it does it will make studying language easier in some ways. How so?

Well, if M works then there are many parts of FL, the non-linguistically proprietary ones, that can be studied in animals other than humans. After all, M is the position that FL incorporates operations that are cognitively and/or computationally general, which means that they are not exclusive to humans. This is very different from earlier views of FL on which a very large part of FL consisted of what looked like language-specific (and hence human-specific) structure. As it is both illegal and rude to do to us what we regularly do to mice, if most of FL resides in us but not in them then standard methods of cog-neuro inquiry will be unavailable. If, however, large parts of FL are recycled operations and principles of a-linguistic cognition and/or computation (which is what M is betting), then we can, in principle, learn a lot about FL by studying non-human brains. What we cannot learn much about are the UG parts, for, by assumption, these are special to us. However, if UG is a small part of FL, this leaves many things to potentially investigate.

Second, I noted that if M gets anywhere then it promises to address what David Poeppel describes as the parts-list problem: it provides a list of basic properties whose incarnation in brains is worth looking for. In other words, it breaks linguistic competence down into manageable units. In fact, the fecundity of this way of looking at things has already been exploited by some cog-neuro types (e.g. Pallier et al. and Friederici & Co) in their efforts to localize language function in the brain. It turns out that looking for Merge may be more tractable than looking for Raising. So, two nice consequences for cog-neuro of language should M prove successful.
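To give a feel for why Merge is such a spare target (my own illustration, not Poeppel’s or Pallier’s analysis), the core operation amounts to little more than iterated binary set formation:

```python
def merge(a, b):
    """Bare Merge: combine two syntactic objects into an unordered set."""
    return frozenset([a, b])

# Building "the dog barked" bottom-up by iterated Merge:
dp = merge("the", "dog")      # {the, dog}
tp = merge(dp, "barked")      # {{the, dog}, barked}
print(tp)
```

An operation this simple, applied over and over, is at least the kind of primitive one can imagine hunting for in neural data, whereas a construction-sized notion like Raising bundles together many such steps.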

I do not think that this line of argument proved to be that persuasive, but not really because of the M-ishness of the ideas per se. I think that the main resistance comes from another idea. There is a view out there that brains cannot track the kinds of abstract structures that linguists posit (btw, this is what made David's second lecture so important). Peter Hagoort in his presentation noted that brains do not truck in “linguaforms.” He takes the Kosslyn-Pylyshyn debate over imagery to be decisive in showing that brains don’t do propositions. And if they don’t, then how can they manipulate the kinds of ling structures that GG postulates? I still find Hagoort’s point to be a complete non-sequitur. Even if imagery is non-propositional (a view that I do not actually accept), it does not follow that language is. It only follows that the two are different. However, the accepted view, as Hagoort renders it, is that brain mechanisms in humans are not in any way different in kind from those in other animals, and so if their brains don’t use linguaforms then neither can ours. I am very confident that our brains do manipulate linguaforms, and I suspect that theirs do to some extent as well.

What makes a brain inimical to linguaforms? Well, basically it is assumed that brains have a neural net/connectionist architecture. IMO, this is the main stumbling block: CNers take all brain function to be a species of signal detection. This is what neural nets are pretty good at doing. There is a signal in the data, the data are noisy, and the brain’s job is to extract that signal from the data. GGers don’t doubt that brains do some signal processing, but we also believe that the brain does information processing in Gallistel’s sense. However, as Gallistel has noted, CNers are not looking for the neural correlates required to make information processing possible. The whole view of the brain as a classical computing device is unpopular in the CN world, and this will make it almost impossible to deal with most of cognition (as Randy has argued), language being just a particularly clear and hard example of the general cognitive case.

I was asked what kind of neuro experiment we could do to detect the kinds of ling structure I believe exist. Note, neuro experiments, not behavioral ones. I responded that if CNers told us the neural equivalent of, say, a stack or a buffer or embedding, I could devise an experiment or two. So I asked: what are the neural analogues of these notions? There was silence. No idea.

Moreover, it became pretty clear that this question never arises. Gallistel, it seems, is entirely correct. The CN community has given up on the project of trying to find how general computational properties are incarnated. But given that every theory of parsing/production that I know of is cast in a classical computational idiom, it is not surprising that GG stuff and brain stuff have problems making contact. CN studies brains in action. It cannot yet study what brains contain (i.e. what kinds of hard disks the brain contains and how info is coded on them). Until we can study this (and don’t hold your breath), CN can study language only to the degree that it can study how linguistic knowledge is used. But all theories of how ling knowledge is used require the arsenal of general computational concepts that Gallistel has identified. Unfortunately, current CN is simply not looking for how the brain embodies these, and so it is no surprise that making language and the brain sciences fruitfully meet is very hard. However, it's not language that's the problem! It is hard for CN to give the neural correlates of the mechanisms that explain how ants find their way home, so the problem is not a problem of language and the brain, but of cognition and the brain.
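To give a sense of what “cast in a classical computational idiom” means here, consider a minimal, purely illustrative sketch (the toy lexicon and the simplification are mine): even a crude recognizer for center-embedded clauses needs something stack-like to remember which subjects are still awaiting their verbs.

```python
# Toy recognizer for center-embedding ("the rat the cat chased ran"):
# subjects are pushed as they arrive and popped when a verb appears.
NOUNS = {"rat", "cat", "dog"}
VERBS = {"chased", "ran", "bit"}

def recognize(sentence):
    stack = []                      # pending subjects awaiting their verbs
    for word in sentence.split():
        if word in NOUNS:
            stack.append(word)      # push each new subject
        elif word in VERBS:
            if not stack:
                return False        # a verb with no subject to pair with
            stack.pop()             # innermost pending subject gets its verb
    return not stack                # well-formed iff every subject got a verb

print(recognize("the rat the cat chased ran"))      # True
print(recognize("the rat the cat the dog chased"))  # False: verbs are missing
```

The question that went unanswered was, in effect: what in neural tissue plays the role of that stack, or of the push and the pop? That is the sort of primitive Gallistel is asking CN to look for.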

So, how did it go? I believe that I got some CNers to understand what GG does and dispelled some myths. Yes, our data is fine; yes, we believe in meaning; yes, Gs exist, as does FL with some UG touches; no, everything is not in the signal… However, I also came away thinking that Gallistel’s critique is much more serious than I had believed before. The problem is that CN has put aside the idea that brains are information processing systems and sees them as fancy signal detection devices. And until this signal-detection view is put aside and CN finds the neural analogues of classical computational concepts, mapping ling structure to neural mechanisms will be virtually impossible, not because the structures are linguistic but because they are cognitive. There is no current way to link linguistic concepts to brain primitives because brain primitives cannot do any kind of cognition at all (sensation, yes; perception, partly; but cognition, nada).

Where does that leave us? We can still look for parts of the brain that correlate with doing languagy things (what David Poeppel calls the map problem (see next post)), but if the aim is to relate brain mechanisms to linguistic mechanisms, that goal remains a long way off so long as we cannot find the kinds of computational structures and operations that Gallistel has been urging CN to look for.


So how did it go? Well, not bad. Nijmegen is nice. The weather was good. The food served was delicious and, let me say this loud and clear, I really enjoyed the time I spent talking with the CNers, especially Peter Hagoort. He likes a good argument and is really fun to disagree with (and that is easy to do given how wrong he is about things linguistic). So, it was fun. It may even have been productive. However, I doubt the lectures, excellent though they were, will mark a sea-change in ling-neuro interactions. I hope I am wrong, but I doubt it. We shall see.

Monday, January 25, 2016

Three pieces to look at

I am waiting for links to David Poeppel’s three lectures and when I get them I will put some stuff up discussing them. As a preview: THEY WERE GREAT!!! However, technical issues stand in the way of making them available right now, and to give you something to do while you wait I have three pieces that you might want to peek at.

The first is a short article by Stephen Anderson (SA) (here). It’s on “language” behavior in non-humans. Much of it reviews the standard reasons for not assimilating what we do with what other “communicative” animals do. Many things communicate (indeed, perhaps everything does, as SA states in the very first sentence), but only we do so using a system of semantically arbitrary structured symbols (roughly, words) that we combine to generate a discrete infinity of meanings (roughly, syntax). SA calls this, following Hockett, the “Duality of Patterning” (5):

This refers to the fact that human languages are built on two essentially independent combinatory systems: phonology, and syntax. On the one hand, phonology describes the ways in which individually meaningless sounds are combined into meaningful units — words. And on the other, the quite distinct system of syntax specifies the ways in which words are combined to form phrases, clauses, and sentences.

Given Chomsky’s 60-year insistence on the centrality of hierarchical recursion and discrete infinity as the central characteristics of human linguistic capacity, the syntax side of this uniqueness is (or should be) well known. SA usefully highlights the importance of combinatoric phonology, something that Minimalists, with their focus on the syntax to CI mapping, may be tempted to slight. Chomsky, interestingly, has focused quite a lot on the mystery behind words, but he too has been impressed with their open-textured “semantics” rather than their systematic AP combinatorics.[1] However, as SA notes, the latter is really quite important.

It is tempting to see the presence of phonology as simply an ornament, an inessential elaboration of the way basic meaningful units are formed. This would be a mistake, however: it is phonology that makes it possible for speakers of a language to expand its vocabulary at will and without effective limit. If every new word had to be constructed in such a way as to make it holistically distinct from all others, our capacity to remember, deploy and recognize an inventory of such signs would be severely limited, to something like a few hundred. As it is, however, a new word is constructed as simply a new combination of the inventory of familiar basic sound types, built up according to the regularities of the language’s phonology. This is what enables us to extend the language’s lexicon as new concepts and conditions require. (5)

So our linguistic atoms are peculiar not only semantically but phonetically as well. This is worth keeping in mind in Evolang speculations.
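The combinatorial payoff SA points to is easy to put numbers on (the inventory size and word length below are illustrative figures of my own, not Anderson’s):

```python
# Holistically distinct signs cap the vocabulary at however many globally
# distinct signals can be stored and told apart (Anderson's "a few hundred").
# Combinatorial phonology, by contrast, explodes even with modest resources:
phonemes = 25        # illustrative phoneme inventory
max_length = 6       # illustrative maximum word length in segments

possible_forms = sum(phonemes ** n for n in range(1, max_length + 1))
print(f"{possible_forms:,} possible distinct word forms")   # 254,313,150
```

Most of these strings would of course be phonotactically illicit in any given language, but even a small surviving fraction dwarfs what a holistic signaling system could support.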

So, SA reviews some of the basic ways that we differ from them when we communicate. It also ends with a critique of the tendency to semanticize (romanticize the semantics of) animal vocalizations. SA argues that this is a big mistake and that there is really no reason to think that animal calls have any interesting semantic features, at least if we mean by this that they are “proto” words. I agree with SA here. However, whether I do or not, if SA is correct, then it is important, for there is a strong temptation (and tendency) to latch onto things like monkey calls as the first steps towards “language.” In other words, it is the first refuge of those enthralled by the “continuity” thesis (see here). It is thus nice to have a considered take-down of the first part of this slippery slope.

There’s more in this nice compact little paper. It would even make a nice piece for a course that touches on these topics. So take a look.

The second paper is on theory refutation in science (here). It addresses the question of how ideas that we take to be wrong are scientifically weeded out. The standard account is that experiments are the disposal mechanism. This essay, based on the longer book that the author, Thomas Levenson, has written (see here), argues that this is a bad oversimplification. The book is a great read, but the main point is well expressed here. It explains how long it took to lose the idea that Vulcan (you know, Mr Spock’s birthplace) exists. Apparently, it took Einstein to kill the idea. Why did it take so long? Because Vulcan’s existence was a good idea that fit well with Newton’s ideas and that experiment had a hard time disproving. Why? Because small modifications of good theories are almost always able to meet experimental challenges, and when there is nothing better on offer, such small modifications of the familiar are reasonable alternatives to dumping successful accounts. So, naive falsificationism (the favorite methodological stance of the hard-headed, no-nonsense scientist) fails to describe actual practice, at least in serious areas of inquiry.

The last paper is by David Deutsch (here). The piece is a critical assessment of “artificial general intelligence” (AGI). The argument is that we are very far from understanding how thought works and that the contrary optimism that we hear from the CS community (the current leaders being the Bayesians) is based on an inductivist fallacy. Here’s the main critical point:

[I]t is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways. Of course, given the explanation, those drastic ‘changes’ in the earlier pattern of 19s are straightforwardly understood as being due to an invariant underlying pattern or law. But the explanation always comes first. Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation.

Note, the last sentence is the old observation about the vacuity of citing “similarity” as an inductive mechanism. Any two things are similar in some way. And that is the problem. That this has been repeatedly noted seems to have had little effect. Again and again the idea that induction based on similarity is the engine that gets us to the generalizations we want keeps cropping up. Deutsch notes that this is still true of our most modern thinkers on the topic.

Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. … As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.

The only thing that Deutsch gets wrong in the above is the idea that mainstream psych has gotten rid of its inductive bias. If only!

The piece is a challenge. I am not really fond of the way it is written. However, the basic point it makes is on the mark. There are serious limits to inductivism and the assumption that we are on the cusp of “solving” the problem is deserving of serious criticism.

So three easy pieces to keep you busy. Have fun.



[1] I put ‘semantics’ in scare quotes because Chomsky does not think much of the idea that meaning has much to do with reference. See here and here for some discussion.

Tuesday, January 19, 2016

Cogneuro cross training; the Gallistel method

I am getting ready to fly to the Netherlands where I am going to defend Generative Grammar’s (GG) neuro-cognitive relevance. The venue? David Poeppel has been invited to give three lectures on brain and language (see here (neat poster btw)) and I have been invited to comment on the third, Peter Hagoort being the other discussant. The lectures are actually billed as “neurobiological provocations” and David thought that I fell snugly within the extension of the nominal predicate. Given Peter Hagoort’s published views (see here for link and discussion) about GG and his opinion that it has lost its scientific mojo, I suspect (and hope) that the exchange will be lively. The position that I will argue for is pretty simple. Here are the main points:

1.     The following two central claims of GG are, conceptually, near truisms (though, sadly, not recognized as such):
a.     Grammars (G) are real mental objects
b.     FL exists and has some linguistically proprietary structure
2.     At least one defining feature of FL/UG is that it licenses the construction of Gs which generate objects with unbounded hierarchical complexity (aka: recursion).
3.     Most versions of GG identify (more or less) the same kinds of G objects and invoke the same kinds of G principles and operations.
4.     Contrary to received wisdom, GG does not change its theoretical character every 15 minutes. In fact, the history of theory change in GG has been very conservative with later theoretical innovations retaining most of the insights and generalizations of the prior stage.
5.     It’s a big mistake to confuse Greenberg universals with Chomsky universals.
6.     Linguistic data is, methodologically speaking, almost as pure as the driven snow (Yay for Sprouse, Almeida, Schutze, Phillips and a host of others). There is nothing wrong with more careful vetting of the data except that most of the time it’s a pointless expenditure of effort (i.e. little marginal return in insight for the extra time and money) whose main objective seems to be to make things look “sciency” (as in “truthy”).
7.     The autonomy of syntax thesis does not mean that GG eschews semantics. It is, in fact, another almost certainly truistic claim about the structure of human Gs (viz. that syntactic structure is not reducible to phonetic or semantic or informational or communicative structure).
8.     GG recognizes that there is G variation and has things to say about it.
9.     Studying linguistic communication is certain to be much harder than studying G competence precisely because the former presupposes some conception of the latter. Gs have a hope of being natural kinds whereas communication is certainly a massive interaction effect and hence will be very hard to study. Disentangling interaction effects is a real pain, and not only in the cog-neuro of language!

That’s what I will be saying, and given Hagoort’s diametrically opposite views on many of these matters, the discussion should be (ahem) lively. However, I have promised to be on my best behavior, and given that I believe it is very important for linguistics that cog-neuro appreciate how much GG has to offer, I am going to play as nicely as I know how, all the while defending a Rationalist conception of FL/UG and of the Gs that FL generates.

I admit that I have been training a bit for this event. My preparation has mainly consisted of re-reading a bunch of papers, and aside from getting me re-focused on some relevant issues, this has also allowed me to re-engage with some really terrific stuff. I want to especially mention a terrific paper by Randy Gallistel and Louis Matzel (G&M) (here). I have noted it in other posts, but I don’t think that I ever discussed it in any detail. I want to somewhat rectify that oversight here.

IMO, the paper is indispensable for anyone interested in why neuroscience and cognition have mixed about as well as oil and water. What makes G&M so good? It argues that the problem stems from the deep-seated Empiricism of contemporary neuroscience. This Empiricist bias has largely prevented neuroscience from even asking the right kinds of questions, let alone providing real insights into how brains embody cognition. A commitment to an Empiricist Associationist psychology has blinded neuroscience to interesting questions. Moreover, and this is what is most interesting in G&M, Empiricist blinders have prevented neuroscience from noticing that there is little cognitive evidence in favor of its pet theory of the brain and no neuro evidence for it either. This, G&M argues, has been obscured by a kind of unfortunate intellectual two-step: psychologists believe that some of the best evidence for Associationism comes from neuroscience, and neuroscientists think that some of the best evidence for it comes from psychology. In other words, there is a reinforcing delusion in which associationist learning and synaptic plasticity take in one another’s dirty laundry and, without doing any vigorous washing or even mild rinsing, conclude that the two dirty piles are together crisp and clean. G&M argues that this is fundamentally wrong-headed. Here are the basics of the argument.

G&M identifies two broad approaches to learning and memory. The first is the complex of associationism plus neural nets with Hebbian synapses (ANN) (“what fires together wires together”):

In the associative conceptual framework, the mechanism of learning cannot be separated from the mechanism of memory expression. At the psychological level of analysis, learning is the formation of associations, and memory is the translation of that association into a behavioral change. At the neuroscientific level of analysis, learning is the rewiring of a plastic nervous system by experience, and memory resides in the changed wiring. (170)
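An aside, to unpack the slogan quoted above: the Hebbian “fire together, wire together” idea corresponds to a very simple weight-update rule. A minimal sketch of my own, not G&M’s:

```python
import numpy as np

def hebbian_step(w, x, lr=0.1):
    """One Hebbian update: connections from active inputs to active
    outputs are strengthened ("fire together, wire together")."""
    y = w @ x                          # post-synaptic activity
    return w + lr * np.outer(y, x)     # reinforce co-active pre/post pairs

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4)) * 0.01     # small random initial weights
x = np.array([1.0, 0.0, 1.0, 0.0])     # a recurring input pattern
for _ in range(5):
    w = hebbian_step(w, x)
print(w.round(4))   # only the columns for the active inputs have changed
```

On this picture, learning just is the rewiring induced by repeated inputs, which is exactly what the quoted passage says.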

This contrasts with an information processing (IP) approach:

[In] the information-processing perspective, learning and memory are distinct mechanisms with different functions: Learning mechanisms extract potentially useful information from experience, while memory carries the acquired information forward in time in a computationally accessible form that is acted upon by the animal at the time of retrieval (Gallistel & King 2009). (170)

G&M notes another critical difference between the two approaches:

The distinction between the associative and information-processing frameworks is of critical importance: By the first view, what is learned is a mapping from inputs to outputs. Thus, the learned behavior (of the animal or the network, as the case may be) is always recapitulative of the input-output conditions during learning: An input that is part of the training input, or similar to it, evokes the trained output, or an output similar to it. By the second view, what is learned is a representation of important aspects of the experienced world. This representation supports input-output mappings that are in no way recapitulations of the mappings (if any) that occurred during the learning. (170)

It is not, then, an accident that so many Associationists have a problem distinguishing a model of the data from a model of the system that generates the data. For an ANNer, modeling the data is modeling the system, since the system is just a way of modeling the I/O relations in the data. The brain, for ANNers, captures the generalizations in the data and more or less faithfully encodes them. Not so for an IPer.

And this leads to a host of other important differences. Here are two that G&M makes much of:

1.     ANN approaches eschew “representations” and, as such, are domain general
2.     IP theories are closely tied to the computational theory of mind and this “leads to the postulation of domain-specific learning mechanisms because no general-purpose computation could serve the demands of all types of learning” (175).

Thus representations, computation and domain specificity form a natural triad, and forsaking one leads naturally to rejecting the others. There really is little middle ground, which is precisely why the Empiricism/Rationalism divide is so deep and consequential.

However, interesting though this discussion is, it is not what I wanted to focus on. For me, the most interesting feature of G&M is its critique of Hebbianism (i.e. the fire-wire pairing of synapses), the purported neural mechanism behind ANN views of learning.

The main process behind the Hebbian synapse is one known as “long-term potentiation” (LTP). This is the process wherein inputs modify synaptic transmission (e.g. increase amplitude and/or shorten latencies), and this modification is taken to causally subvene associative learning. In other words, association is the psychology of choice because synapses reorganize LTPishly, thereby closely tracking the laws of association (i.e. “properties of LTP aligned closely with those of the associative learning process as revealed by behavioral experimentation” (171)).

This putative relation between associationist psychology and LTP biology has been one of the main arguments for ANN. Thus connectionist neural net models not only look “brainy”, they actually work like brains do! Except, as G&M shows, they don’t really, as “the alignment” between LTP mechanisms and association measured behaviorally “is poor” (171).

How poor is the fit? Well, G&M argues that the details of the LTP process line up very badly with the associationist ones along a host of dimensions. For example, associationist and LTP time scales are vastly different: a few milliseconds for LTP versus (up to) hours for associations. Moreover, whereas LTP formation cares about inter-stimulus intervals (how close the relevant effects are in time to one another), associations don’t. They care about the ratio of intervals between conditioned and unconditioned stimuli (i.e. the ratio of the CS-US interval to the US-US interval). In sum, as regards “timing,” LTP growth and behavioral association formation are very different.

Moreover, as G&M notes, this is widely recognized to be the case (172), but despite this the conclusion that association laws supervene on LTP biology is not called into question. So, the disconnect is acknowledged but no consequences for ANN are drawn (see pp. 172-3 for discussion). Defenders of the link rightly observe that the fact that LTP and association formation don’t track one another does not imply that they are not intimately linked. True, but irrelevant. The issue is not whether the two might be related but whether they indeed are, and the fact that they don’t track one another means that the virtues of either cannot redound to the benefit of the other. In fact, as I note anon, things are worse than this.

But first, here are some other important differences G&M discusses:

·      “Behaviorally measured associations can last indefinitely, whereas LTP always decays and usually does so rapidly” (172).
·      “Both forgotten and extinguished conditioned responses exhibit facilitated reacquisition; that is, they are relearned more efficiently than when they were initially acquired” whereas “following its decay to baseline LTP is neither more easily induced nor more persistent than it was after previous inductions” (172).

G&M provide a handy table (174) enumerating the ways that LTP and associations fail to track one another. Suffice it to say that the two mechanisms seem to be very different, and how LTP biology is supposed to support associations is a mystery. And I mean this literally.

Rather than draw the problematic conclusion that there is little evidence that Hebbian synapses can causally undergird associative learning, ANNers appeal to emergent properties of the network, rather than LTP operations at the synapses, to provide the neural glue for associationism (see p. 173). As G&M note, this is mysterianism, not science. And though I sympathize with the view that we may never understand how the brain does what it does, I don’t consider this a breakthrough in neuroscience. The following is a funny kind of scientific argument to make in favor of ANN: though the main causal mechanisms for learning are association via Hebbian synapses, we cannot understand this at the micro level of associations and synapses but only at the macro level of whole brains. Brains are where the action is, in virtue of the structure of their synapses, but how the synapses do this will be forever shrouded in mystery. So much for the virtues of analysis. I love it when hard-headed Empiricism saves itself by flights of mystical fancy.

There is a second line of argument. G&M shows that classical associationist effects require the calculation of intervals (numbers coding for duration) and that Hebbian-based neural nets can’t code this kind of info in a usable form (172). As G&M puts it:

“[T]he mechanism that mediates associative learning and memory must be able to encode the intervals between events in a computationally accessible form. There is no hypothesis as to how this could be accomplished through the modification of synaptic transmission” (172).

So, the temporal properties don’t fit together, and the basic facts about classical conditioning invoke information that cannot be coded in a set of synapses in terms of LTP. It appears that there really is no there there. What we are left with are arguments from pictograms: ANN stories make for nice pictures (i.e. neural nets look so brainy and synapsy, and connectionist nets “learn” so well!), but as the putative fit between neural mechanism and behavioral pattern is very poor (as G&M shows and, it seems, is largely conceded), there is no good biological reason for holding onto associations and no good psychological reason for embracing neural nets. Time to move on.

The rest of G&M outlines what kinds of stories we should expect from an IP cog-neuroscience. Surprise, surprise: we are waist deep in representations from the get-go. We are awash with domain-specific computations and mechanisms. In other words, we get what looks to be a Rationalist conception of the mind/brain, or as G&M puts it, “a materialist form of Kantian rationalism” (193). A place, I would add, where GG of the Chomsky variety should feel very much at home. In other words, the problems that neuroscience claims to have with GG are more indicative of problems with the current state of the brain sciences than of problems with GG. GG is not perfect (ok, I am kidding, it is), but there is little reason to believe that what we know about the brain (which is quite limited) precludes accounts of the GG variety, contrary to ANN doctrine.

Conclusion? Time to assign ANN to the trash bin of ideas that looked nice but were entirely off track. The sooner we do this, the sooner we can start addressing the serious problems of relating minds to brains (G&M list a bunch on p. 175). We have a long way to go. And maybe I can make this point in Nijmegen too, but nicely, very nicely.