6
Parameters, canalization, innateness, Universal Grammar
 
JM:
Still in the vein we've been talking about, I'd like to ask about linguistic development (language growth) in the individual. You've employed the concept of – or at least alluded to the concept of – canalization, C. H. Waddington's term from about
fifty or sixty years ago, and suggested that the linguistic development of the child is like canalization. Can parameters be understood as a way of capturing canalization?
NC:
Canalization sounds like the right idea, but as far as I know, it hasn't had many empirical applications in biology.
 
With regard to parameters, there are some basic questions that have to be answered. One question is: why isn't there only a single language? Why do languages vary at all? So suppose this mutation – the great leap forward – took place; why didn't it fix the language exactly? We don't know what the parameters are, but whatever they are, why is it these, and not those? So those questions have got to come up, but they are really at the edge of research. There's a conceivable answer in terms of optimal efficiency –
efficiency of computation. That answer could be something like this, although no one's proposed it; it's really speculation. To the extent that biology yields a single language, that increases the genetic load: you have to have more genetic information to determine a single language than you do to allow for a variety of languages. So there's a kind of saving in not having the language be completely fixed. On the other hand, that makes acquisition much harder: it's easier to acquire a language that is fixed in advance. And it could be that there's a mathematical solution to this problem of simultaneous optimization: how can you optimize these two conflicting factors? It would be a nice problem; but as yet you can't formulate it.
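What "optimizing two conflicting factors" could look like can be sketched, purely as an illustration and not as anything proposed in the passage above: two invented quadratic cost curves stand in for genetic load (rising as more of the grammar is fixed innately) and acquisition difficulty (rising as more is left open), and a short Python scan finds the mix that minimizes their sum. Every quantity here is an assumption made up for the sketch.

    # Purely illustrative: two made-up cost curves standing in for the
    # conflicting pressures mentioned above. 'fixed' is the fraction of
    # the grammar determined innately (0 = fully open, 1 = a single
    # fixed language).
    def genetic_load(fixed):
        # more innate specification -> more genetic information required
        return fixed ** 2

    def acquisition_cost(fixed):
        # more left open -> more the learner must settle from experience
        return (1.0 - fixed) ** 2

    # Scan candidate values and keep the one minimizing the combined cost.
    candidates = [i / 100 for i in range(101)]
    best = min(candidates, key=lambda f: genetic_load(f) + acquisition_cost(f))
    print(best)  # 0.5 for these symmetric toy curves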
And there are other speculations around; you've read Mark Baker's book (Atoms of Language), haven't you?
JM:
Yes, I have.
NC:
. . . well, there's this nice idea that parameters are there so we can deceive each other . . .
JM:
. . . and use that option in wartime.[C]
NC:
Of course, the understanding of what parameters are is too rudimentary to try to get a principled answer. But those questions are going to arise.
Take phonology. It's generally assumed – plausibly, but not with any direct evidence – that the mapping from the narrow syntax to the semantic interface is uniform. There are lots of theories about it; but everyone's theory is that this is the way it works for every language – which is not unreasonable, since you have only very limited evidence for it. The narrow syntax looks uniform up to parameters. On the other hand, the mapping to the sound side varies all over the place. It is very complex; it doesn't seem to have any of the nice computational properties of the rest of the system. And the question is why. Well, again, there is a conceivable snowflake-style answer, namely, that whatever the phonology is, it's the optimal solution to a problem that came along somewhere in the evolution of language – how to externalize this internal system, and to externalize it through the sensory-motor apparatus. You had this internal system of thought that may have been there for thousands of years and somewhere along the line you externalize it; well, maybe the best way to do it is a mess. That would be the nicest answer, although it's a strange thought for me. And you can think of long-term questions like that all along the line.
JM:
Would optimization be required for the conceptual-intentional case?
NC:
That is really a puzzle. So, why do our concepts
always have this invariant, curious property that they conform to our “cognoscitive powers,” to use Ralph Cudworth's terminology, not to the nature of the world? It's really strange. And it seems to be completely independent. There are no sensible origins, selectional advantages, nothing . . .
JM:
You've often emphasized the importance of
poverty of stimulus facts with respect to knowledge of all aspects of language – namely, structural, phonological-phonetic, and meaning-related conceptual domains. You have pointed out that the facts demand explanation, and that the theory of Universal Grammar is a hypothesis, perhaps the only viable one, that explains these particular facts. Could you speak to what – given the current understanding of UG and of computation in it – the innateness of these domains amounts to?
NC:
First of all, I should say – I see this now clearly in retrospect – that it was a tactical mistake to bring up the issue of the poverty of the
stimulus. The reason is that it makes it look as if it's only about language, but it's a universal property of growth. The fact that we have arms and legs is a poverty of stimulus property – nutrition didn't determine them. So any aspect of growth – physical, cognitive, whatever – is going to have poverty of stimulus issues. And, at least in the sciences – it's not God, or something – it's universally assumed that it has to do with genetic endowment. So presumably the case of language has to do with genetic endowment. That's Universal Grammar as it has often been conceived.
Now actually, that's wrong, because it's not due to genetic endowment; it's due to genetic endowment plus laws of the way the world works. Nobody knows how it works, but it's taken for granted by serious biologists in the mainstream that some kinds of developmental constraints or architectural factors play a crucial
role in growth, and also in evolution – in both forms of development. Evolution and growth – which, in the genetic cases, aren't so far apart – are both going to play a role. So you really have two factors to consider – or rather, three. Experience is going to make some choices.
Universal Grammar or genetic endowment will set constraints. And the developmental constraints – which are independent of language and may be independent of biology – they'll play some role in determining the course of growth. The problem is to sort out the consequences of those factors.
Well, what's Universal Grammar? It's anybody's best theory about what language is at this point. I can make my own guesses. There's the question of lexical items – where they come from. That's a huge issue. Among the properties of lexical items, I suspect, are the parameters. So they're probably lexical, and probably in a small part of the lexicon. Apart from that, there's the construction of expressions. It looks more and more as if you can eliminate everything except Merge itself. Then you go on to sharpen it. It's a fact – a clear fact – that the syntactic objects you construct have some information in them relevant to further computation. Well, optimally, that information would be found in an easily discoverable, single element, which would be, technically, its label. The labels are going to have to come out of the lexicon and be carried forward through the computation; and they should contain, optimally, all the information relevant for further computation. Well, that means that for external Merge, labels are going to involve selectional properties – so, how does this thing fit with the next thing that comes along? For internal Merge, what it looks like – kind of what you would expect in that domain – is that it's the probe that finds the input to internal Merge and sticks it at the edge, because you don't want to tamper with what's already built – just rearrange it. Well, that carries you pretty far, and it takes you off to features: what are they, where do they come from, and so on . . .[C]
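The operations just described can be given a minimal sketch. On the standard minimalist construal, Merge is binary set formation over syntactic objects; the Python below models objects as labeled structures and adopts, purely as a simplifying assumption, the convention that the first argument projects its label. The class and function names are invented for the illustration, not part of any actual proposal.

    # Sketch of Merge over labeled syntactic objects. Merge is standardly
    # unordered set formation {X, Y}; ordered tuples are used here only
    # to keep the example simple.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SO:                  # a syntactic object
        label: str             # the single, easily discoverable element
        parts: tuple = ()      # empty for lexical items

    def merge(head: SO, comp: SO) -> SO:
        # External Merge: combine two objects; by assumption the head
        # projects, so its label is carried forward and encodes what is
        # relevant to further computation (e.g. selectional properties).
        return SO(label=head.label, parts=(head, comp))

    def internal_merge(probe: SO, target: SO) -> SO:
        # Internal Merge: re-merge a copy of 'target' (assumed to occur
        # inside the probe) at the edge, leaving the original in place -
        # rearrangement without tampering.
        return SO(label=probe.label, parts=(target, probe))

    dp = merge(SO('D'), SO('N'))  # a nominal phrase, labeled 'D'
    vp = merge(SO('V'), dp)       # a verbal phrase, labeled 'V'
    print(internal_merge(vp, dp).label)  # 'V': the probe's label projects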
JM:
Noam, that's all the time for today. Thank you very much . . .
JM:
[Discussion continues] To pick up on an issue from the last session, we had been discussing innateness and I think we had come to an understanding to the effect that with lexical concepts we have no clear idea of what it means for them to be innate, but they are.
NC:
Part of the reason for that – for not knowing what it is for them to be innate – is that we don't have much idea what they are.
JM:
Yes. Going further down the list into areas where we have a bit more confidence that we know what is going on, we had come to understand that with regard to
structural features the best way to understand innateness now is probably largely in terms of Merge, that is, a conception of language that focuses on the idea that most of the structure of language is somehow due to this introduction of Merge some fifty or sixty thousand years ago. Is that plausible?
NC:
Well, that is very plausible. How much of language that accounts for we don't really know – basically, finding that out is the Minimalist
Program: how much is accounted for by this one innovation? On Merge itself, every theory agrees: if you have a system with infinitely many hierarchically organized expressions, you have Merge or something equivalent, at the very least, whatever the formulation is. We just take for granted that Merge came along somewhere, and you can more or less time it. Then the question is, given that and the external conditions that language has to meet – interface conditions and independent properties of organisms, or maybe beyond organisms (physical laws and so on and so forth) – how much of language is determined by them? That's a genuine research question – much more so than I would have guessed ten years or so ago.
JM:
OK, continuing on the list, what about
phonological and phonetic features and properties?
NC:
Well, there's no doubt that there's a specific array of them and you can't just make up any one. And they are plainly attuned to the
sensory-motor apparatus[; they meet interface conditions without, of course, being ‘about’ them]. In fact, the same is true if you use a different modality like sign: what you do is attuned to the sensory-motor apparatus; it [sign] doesn't use phonetic features, but some counterpart. The same kinds of questions arise about them as about lexical concepts. It's just that they – the phonetic features – are easier to study. Not that it's easy. Here at MIT there has been half a century of serious work with high-tech equipment trying to figure out what they are, so it doesn't come easily; but at least it's a much more easily formulable problem. Also, on the sensory-motor side, you can imagine comparative evolutionary evidence. On the lexical-semantic side, you can't even think of any
comparative evidence that works. But [on the sensory-motor side] other organisms have sensory-motor systems; they're very much like ours, it appears. So you might be able to trace origins. That's the usual hard problem with evolutionary theory. So far as we know, most of those are
precursors of language. It's possible that there's adaptation of the sensory-motor system to language – that's likely – but just what it is is very hard to say.
JM:
Is there evolutionary evidence from other primates for sensory-motor systems, or primarily from other creatures?
NC:
Other primates? Well, they have tongues and ears, and so on, but it's . . .
JM:
Not the famous dropped larynx.
NC:
Well, they don't have the dropped larynx, but other organisms do – they've been found in deer, I think (Fitch & Reby 2001); but that doesn't seem critical. It's not very clear what difference it would make. You wouldn't be able to pronounce some sounds but you'd be able to pronounce others. But humans learn language and use it freely with highly defective sensory-motor systems, or no control of the sensory-motor system at all. That's one of the things that Eric Lenneberg found –
discovered, actually – fifty years ago. [He discovered] that
children with dysarthria [no control over their articulatory systems] – who were thought not to have language by the people raising them, training them, and so on – he discovered they did. He discovered this by standing behind them, saying something, and noticing their reactions. There's more recent work. So you don't require the sensory-motor system – in fact you don't even have to use it; sign language doesn't use it – so it's very hard to see how there could be any argument from the sensory-motor system against developing language. But also the system seems to have been around for hundreds of thousands of years, as far as we can tell from fossil evidence. But there's no indication of anything like language use or the whole set of cognitive capacities that appear to have developed along with it.
Think about it in plain evolutionary terms. Somewhere along the line a mutation took place that led to the rewiring of the brain to give you
Merge. That everyone should accept, whether they like to say it or not. Well, the most parsimonious assumption is that that's all that happened. It's probably not that [alone]; but we have no evidence against it. So unless there's some evidence to the contrary, we sort of keep to that and see how far we can go. Well, mutations take place in an individual, not in a society, so what must have happened at some point is that that mutation took place in one person and then it would be transferred to offspring, or some offspring, at least. It was a pretty small breeding group. So it could be that if it gave a selectional advantage, they'd dominate the breeding group pretty soon, maybe in a few generations.
This could all be done without any communication. It gives you the ability to think, to
construct complex thoughts, to plan, to interpret . . . It's hard to imagine that that wouldn't yield a selectional advantage, so it could be that over some fairly short time, throughout this breeding group, that the capacity to think was well
embedded. The use of it to communicate could have come later. Furthermore, it looks peripheral: as far as we can see from studying language, it doesn't seem to affect the structure of language very much. And it does appear to be largely modality-independent. [No doubt] there are advantages to sound over sight – you can use it in the dark and it goes around corners – things like that. But it's quite possible that it's just a later development that came along, and it may not have had much effect on the structure of language.[C]
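The breeding-group remark above can be given rough numbers. The sketch below uses the standard deterministic haploid selection recursion from population genetics, p' = p(1+s)/(1+ps); the group size and selection coefficient are invented for illustration, and the model ignores genetic drift, which matters in groups this small. The point is only that even a single advantageous variant can spread through a small group on a timescale of generations rather than millennia.

    # Toy spread of one advantageous variant in a small breeding group,
    # using the textbook haploid selection recursion (deterministic;
    # drift is ignored). All numbers are invented for illustration.
    def generations_to_spread(group_size=150, s=0.5, threshold=0.99):
        p = 1.0 / group_size      # one initial carrier
        generations = 0
        while p < threshold:
            p = p * (1 + s) / (1 + p * s)   # p' = p(1+s) / (1 + ps)
            generations += 1
        return generations

    print(generations_to_spread())  # roughly a couple dozen generations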