EEE Seminar: “Natural Language Understanding for Task Descriptions and Beyond”, Hinrich Schütze, 6:30PM December 15 (EN)

Natural Language Understanding for Task Descriptions and Beyond

Speaker: Hinrich Schütze
Ludwig Maximilian University of Munich (LMU Munich), Germany
Date/Time: December 15, 2022 – 18:30 (on Zoom)
***This is an online seminar. To request the event link, please send a message to the department.

Abstract:
Task descriptions are ubiquitous in human learning. They are usually accompanied by a few examples, but little human learning is based on examples alone. In contrast, the typical learning setup for NLP tasks lacks task descriptions and is supervised with hundreds, and often many more, labeled examples.
I will first give an update on our work on Pattern-Exploiting Training (PET). PET mimics human learning in that it leverages task descriptions in few-shot settings by exploiting the natural language understanding (NLU) capabilities of pre-trained language models (PLMs). I will show that PET is particularly promising in real-world few-shot settings. The second part of the talk examines to what extent current PLMs exhibit true NLU. I will introduce CoDA21, a new benchmark that we argue tests for true NLU. Finally, I will review our recent work on neurosymbolic models and their potential for NLU at human levels.

Bio:
Hinrich Schütze is the Chair of Computational Linguistics and co-director of the Center for Information and Language Processing at LMU Munich. He is also a professor (by courtesy) of Computer Science at LMU Munich. Professor Schütze was the President of the Association for Computational Linguistics (ACL) in 2020 and is currently an ELLIS Fellow. He is the co-author of the prominent textbooks “Introduction to Information Retrieval” and “Foundations of Statistical Natural Language Processing”.
Ever since he started his PhD in the early 1990s, Professor Schütze’s research interests have been at the interface of linguistics, cognitive science, neural networks, and computer science. Recent examples include learning with natural language instructions, multilingual representation learning for low-resource languages, computational morphology, and neurosymbolic approaches.