Linguistics Colloquium: Lisa Bylinina

07/06/2022, 14:00 - 15:30

Lisa Bylinina

Title:

Polarity-sensitivity: Experiments with neural models of language

Abstract:
Negative polarity items (NPIs, e.g. English 'any') are licensed only in negative contexts. The notion of 'negative context' can, in principle, be defined on different levels: lexically, syntactically, or in terms of semantics. The correct notion can be identified by exploring the causal structure of speakers' implicit linguistic knowledge. Moreover, one can ask whether the correct notion is inferable directly from textual data alone. To find out, we can look at the linguistic representations developed by neural language models trained exclusively on textual data and compare them to the linguistic representations of human speakers, using behavioural tests that allow such comparisons. I will report on experiments that do exactly that. One result I will discuss is that language models and humans are surprisingly similar when it comes to polarity phenomena, with both showing effects not directly predicted by linguistic theory. Establishing this allows us to use language models to discover new insights into natural language grammar beyond existing linguistic theories. This work contributes to establishing closer ties between psycholinguistic experiments and experiments with language models in the linguistics research cycle.
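To make the methodology concrete: one standard behavioural test of this kind scores minimal pairs that differ only in whether the NPI is licensed, and checks whether the model prefers the licensed variant. The sketch below is illustrative only, assuming an off-the-shelf causal language model (GPT-2 via the Hugging Face transformers library) and an invented minimal pair; it is not the talk's actual experimental setup.

```python
# A minimal sketch of a behavioural NPI-licensing test for a language model:
# compare the log-probability of a sentence where 'any' is licensed by
# negation against a matched sentence where it is not.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Position i predicts token i+1, so align logits with the next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs.gather(1, targets.unsqueeze(1)).sum().item()

# Hypothetical minimal pair: 'any' licensed by negation vs. unlicensed.
licensed = "Nobody has read any of these books."
unlicensed = "Everybody has read any of these books."

print(sentence_logprob(licensed), sentence_logprob(unlicensed))
# If the model has induced NPI licensing from text alone, the licensed
# sentence should receive the higher (less negative) score.
```

The contrast between the two scores, aggregated over many such pairs, is the kind of behavioural measure that can be compared directly with human acceptability judgments.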
