Robert Frank is coming this week for our Joint CSE/IST Colloquium Series in Natural Language Processing!

We are closing this semester's invited talks with another Joint CSE/IST Colloquium for Fall 2018: don't miss Robert Frank's rescheduled talk on December 7th!


Inductive Bias in Language Acquisition: Universal Grammar (UG) vs. Deep Learning

Generative approaches to language acquisition emphasize the need for language-specific inductive bias, Universal Grammar (UG), to guide the hypotheses learners make in the face of limited data. In contrast, computational models of language learning, particularly those rooted in contemporary neural network models, have yielded significant advances in the performance of practical NLP systems, largely without the imposition of any such bias. While UG-based approaches have led to important insights into the stages and processes underlying language acquisition, they have not yielded concrete models of language. At the same time, existing practical computational models have not been widely tested with respect to their ability to extract linguistically significant generalizations from training data. As a result, their ability to meet the challenges posed by UG-based approaches remains unproven.

In this talk, I will review a number of experiments that explore the ability of network models to take on such challenges. Looking at question formation and complementizer-trace effects, we find that (certain) network architectures are capable of learning significant grammatical generalizations through gradient descent learning, suggesting that the architectures themselves impose some of the necessary bias often assumed to require UG. However, inadequacies remain in the generalizations acquired, pointing to the need for hybrid models that integrate language-specific information into network models.
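To give a flavor of why question formation is such a useful test case, here is a minimal sketch (not from the talk; the example sentences and rule names are invented for illustration). A learner exposed only to simple declarative/question pairs cannot tell a linear rule ("front the first auxiliary") apart from a hierarchical one ("front the main-clause auxiliary"); sentences with a relative clause pull the two rules apart, which is what makes them diagnostic of what a network has actually learned.

def linear_rule(words):
    """Front the FIRST auxiliary in the string (surface-order rule)."""
    aux = {"is", "can", "does"}
    for i, w in enumerate(words):
        if w in aux:
            return [w] + words[:i] + words[i + 1:]
    return words

def hierarchical_rule(words, main_aux_index):
    """Front the MAIN-CLAUSE auxiliary (structure-sensitive rule).
    The index would come from a parse; here it is supplied by hand."""
    w = words[main_aux_index]
    return [w] + words[:main_aux_index] + words[main_aux_index + 1:]

# Ambiguous, training-style example: both rules yield the same question.
simple = "the bird can fly".split()
print(" ".join(linear_rule(simple)))             # can the bird fly
print(" ".join(hierarchical_rule(simple, 2)))    # can the bird fly

# Diagnostic example with a relative clause: the rules diverge.
complex_ = "the bird that is hungry can fly".split()
print(" ".join(linear_rule(complex_)))           # is the bird that hungry can fly (ungrammatical)
print(" ".join(hierarchical_rule(complex_, 5)))  # can the bird that is hungry fly (correct)

The experiments ask which of these generalizations a trained network produces on the held-out diagnostic sentences, and thus whether gradient descent plus the architecture alone supplies the structure sensitivity that UG is standardly invoked to explain.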


Robert Frank is currently Professor and Chair of Linguistics at Yale University. He received his PhD from the University of Pennsylvania (Computer and Information Science) and has taught at Johns Hopkins University (Cognitive Science) and the University of Delaware (Linguistics). His research explores models of language learning and processing, as well as the role of computationally constrained grammar formalisms in linguistic explanation.

Find out more about his work here.
