This comment is global-warming-denialism levels of stupid. I'm honestly shocked.
LLMs have no such implications for the field of linguistics. They're barely relevant at all.
Do I really need to point out that human beings do not learn language the way LLMs "learn" language? That human beings do not use language the way LLMs use language? Or that human beings are not mathematical models? Not even approximately. I fucking hate this timeline.
Thank you for saying it. That really was a depressingly incurious comment.
Chomsky, Ian Roberts and Jeffrey Watumull on the topic: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, "The apple falls." That is a description. A prediction might have been the statement "The apple will fall if I open my hand." Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like "Any such object would fall," plus the additional clause "because of the force of gravity" or "because of the curvature of space-time" or whatever. That is a causal explanation: "The apple would not have fallen but for the force of gravity." That is thinking.
I'm not saying this to defend LLMs; I genuinely think they are a waste of so many things in this world, and I can't wait for this hype cycle to be over.
However, there is a lot of research backing statistical learning in language acquisition; it is specifically my friends' research subject. It's a very big thing in intervention for language delays.
It is opposed to Chomsky's innate language theory, which at this point I think almost any linguist or language/speech sciences researcher would tell you isn't a well-accepted theory (at least as a holistic explanation; it could certainly still be true to an extent and be part of other systems).
tl;dr: LLMs are stupid, but it's not broadly true that the way they "learn language" is entirely different from how humans do. The real difference is that they fail to actually learn anything even when imitating humans.
Be that as it may, we aren't getting any answers from LLMs. And given that Universal Grammar was the dominant view for so long, the jury is still out on a viable alternative.
Here's one relevant discussion.
Here's another.
This would be true if Chomsky's claim were simply that he was studying human language acquisition and that machines are different, but his claim was that machines can't learn human languages because they don't have some intuitive innate grammar.
Saying an LLM hasn't learned language becomes harder and harder the more you talk to it and the more it starts walking like a duck and quacking like a duck. To make that claim you'll need some evidence to counter the demonstrable understanding the LLM displays. In his New York Times response, Chomsky just gives his own unprovable theories on innate grammar and some examples of questions LLMs "can't answer," but if you actually ask any modern LLM, it answers them fine.
You can define "learning" and "understanding" in a way that excludes LLMs, but you'll end up relying on unprovable abstract theories until you can come up with an example of a question or prompt that any human would answer correctly and LLMs won't, to demonstrate that difference. I have yet to see any such examples. There's plenty of evidence of them hallucinating when they reach the edge of their understanding, but that is something humans do as well.
Chomsky is still a very important figure, and his political work in Manufacturing Consent is just as relevant as when it was written over 20 years ago. His work on language, though, is on shaky ground, and LLMs have made it even shakier.
Do you really think Chomsky's UG hypothesis from half a century ago was formulated to deny that some dumb mathematical model would be able to simulate human speech?
Almost nothing you've written has any grounding in empirical reality. You have a sentence that reads something like "the more you talk to LLMs, the harder it is to deny that they can use and understand language."
You might as well say that the longer you stare at a printed painting, the harder it is to deny that printers make art. LLMs do not "understand" their outputs or their inputs. If we feed them nonsense, they output nonsense. There's no underlying semantics whatsoever. An LLM is a mathematical model.
I know it looks like magic, but it's not actually magic. And even if it were, it would have nothing to do with linguistics, which is concerned with how humans, not computers, understand and manipulate language. This whole ridiculous conversation is a non sequitur.
Imagine a man in a room
You are at science-denialism levels of ignorance in this conversation; or perhaps you don't actually understand the underlying philosophy of scientific inquiry well enough to see why LLMs were basically able to break the back of both UG and innate acquisition.
You seem like the kind of person who cheerleads "in the spirit of science" but doesn't actually engage in it as a philosophical enterprise. You didn't seem to notice the key point that @[email protected] made, which is that unlike UG, LLMs are actually testable. That's the whole thing right there, and if you don't get the difference, that's fine, but it speaks to your level of understanding of how one actually goes about conducting scientific inquiry.
And if you want to talk about incurious:
You might as well say that the longer you stare at a printed painting, the harder it is to deny that printers make art. LLMs do not "understand" their outputs or their inputs. If we feed them nonsense, they output nonsense. There's no underlying semantics whatsoever. An LLM is a mathematical model.
Specifically, "there's no underlying semantics whatsoever" is the key linchpin here: underlying semantics is exactly what UG demands, and LLMs demonstrate it is not strictly necessary. It's exactly why Chomsky's house of cards crumbles against the counterfactual to UG and innate acquisition that LLMs offer. I had a chance to ask him this question directly about six months before that op-ed was published, and he gave a response about as incurious as yours here about why LLMs, and big complex networks in general, are able to learn. His response was basically the same regurgitation of UG and innate acquisition that he offers in the op-ed. And the key point is that yes, LLMs are just a big bucket of linear algebra, but they represent an actually testable instrument for learning how a language might be learned. That was the most striking part of Chomsky's response, and I found it particularly galling.
And it is interesting that yes, if you feed transformers (I'm going to start using the right term here: transformers) unstructured garbage, you get unstructured garbage out. However, if there is something there to learn, they seem to be at least somewhat effective at finding it. And that occurs in non-language systems as well, including image transformers, transformers used to predict series data like temperature or stock prices, and even DNA and RNA sequences. We are probably going to see transformers capable of translating animal vocalizations like whale and dolphin songs. If you have structured series data, it seems like transformers are effective at learning its patterns and generating coherent responses.
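To make that concrete, here is a minimal sketch of that last point (purely illustrative, assuming PyTorch and an invented toy "temperature" series that is just a noisy sine wave; not anyone's actual experiment): the same generic transformer machinery gets pointed at a numeric series instead of text, with nothing language-specific, let alone grammar-specific, built in.

```python
# Minimal sketch: a generic transformer encoder trained to predict the next
# value of a structured numeric series. Toy data only; the "temperature"
# series is a noisy sine wave invented for illustration.
# (Positional encodings are omitted to keep the sketch short.)
import torch
import torch.nn as nn

class TinySeriesTransformer(nn.Module):
    def __init__(self, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # one scalar reading -> a vector
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)   # vector -> next-value guess

    def forward(self, x):                   # x: (batch, seq_len, 1)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])          # predict the point after the window

# Toy series and sliding windows: 32 past readings -> 1 future reading.
t = torch.linspace(0, 50, 1000)
series = torch.sin(t) + 0.1 * torch.randn_like(t)
windows = series.unfold(0, 33, 1)                      # (N, 33)
x, y = windows[:, :32].unsqueeze(-1), windows[:, 32:]  # inputs, targets

model = TinySeriesTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                        # a handful of full-batch steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

Swap the Linear embedding for a token embedding and scale the stack up and you have, roughly, the language case; the architecture itself neither knows nor cares that the series happens to be words.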
Here's the thing. Chomsky's UG represented a monolith in the world of language, language acquisition, and learning, and frankly it was an actual barrier to progress in the entire domain, because we now have a counterfactual where learning occurs and neither UG nor innate acquisition is necessary or at all relevant. It's not a complete collapse of the ideas, but it's about as close as we'll get, because in at least one case of language acquisition they are completely irrelevant.
And honestly, if you can't handle criticism of ideas in the sciences, you don't belong in the domain. Breaking other people's ideas is fundamental to the process, and it's problematic when people assume you need some alternative in place to break someone else's work.
underlying semantics is exactly what UG demands, and LLMs demonstrate it is not strictly necessary
You know what, I'm going to be patient. Let's syllogize your argument so everyone can get on the same page, shall we?
LLMs have various properties.
???
Therefore, the UG hypothesis is wrong.
This argument is not valid, because it's missing at least one premise. Once you come up with a valid argument, we can debate its premises. Until then, I can't actually respond, because you haven't said anything substantive.
The mainstream opinion in linguistics is that LLMs are mostly irrelevant. If you believe otherwise (for instance, that LLMs can offer insight into some abstract UG hypothesis about developmental neurobiology), explain why, and maybe publish your theory for peer review.
You don't need to project a false argument onto what I was saying.
Chomsky's basic arguments:
1: UG requires understanding the semantic roles of words and phrases to map syntactic structures onto semantic structures.
2: UG posits that certain principles of grammar are universal, and that syntactic and semantic representation is required because meaning changes with structure. The result is semantic universals: basic meanings that appear across all languages.
3: Semantic bootstrapping is then invoked to explain how children use their understanding of semantic categories to learn the syntactic structures of language.
LLMs torpedo all of this as fundamental to language acquisition, because they offer at least one example where none of the above needs to be invoked. LLMs have no innate understanding of language; it is just pattern recognition and association. In UG, semantics is intrinsically linked to syntactic structure; in LLMs, semantics is learned indirectly through exposure rather than through an innate framework. LLMs show that UG and all of its complexity are totally unnecessary in at least one case of demonstrated language acquisition. That's huge. It's beyond huge. It gives us a testable, falsifiable path forward that UG didn't.
The mainstream opinion in linguistics is that LLMs are mostly irrelevant.
Largely because of Chomsky. To invoke Planck's principle: science advances one funeral at a time. Linguistics will finally be able to evolve past the rut it's been in, and we now have real technical tools to do the kind of testable, reproducible, quantitative analysis at scale. We're going to see more change in what we understand about language over the next five years than we've learned in the previous fifty. Prior to now we didn't have anything other than baby humans with which to study the properties of language acquisition. Language acquisition in humans is now a subset of the domain, because we can actually talk about and study language acquisition outside the context of humans. In a few more years, linguistics won't look at all like it did four years ago. If departments don't adapt to this new paradigm, they'll become like all those now-laughable geography departments that didn't adapt to the satellite revolution of the 1970s: funny little backwaters of outdated modes of thinking that the world has passed by. LLMs for the study of language acquisition are like the invention of the microscope, and Chomsky completely missed the boat because it wasn't his boat.
Your conclusion (which I assume is implied, since you didn't bother to write it anywhere) might be something like:
Mathematical models built on enormous data sets do a good job of simulating human conversation (LLMs pass the Turing test)... THEREFORE, Homo sapiens lacks an innate capacity for language (i.e., the UG hypothesis is fundamentally mistaken).
My issue is that I just don't see how to draw this conclusion from your premises. If you were to reformulate your premises into a valid argument structure, we could discuss them and find some common ground.
You haven't demonstrated that you have any real comprehension of the domain, or that you bring anything interesting enough to this conversation to warrant furtherance.
Harsh words coming from someone who can't even state a valid argument. I mean, do you expect me to guess how your conclusion follows from your unrelated premises?
Roses are red.
Violets are blue.
An LLM passed the Turing test.
Therefore, humans lack an innate language capacity.