Ivan Vulić is a Principal Research Associate (equivalent to Associate Professor) and a Royal Society University Research Fellow in the Language Technology Lab at the University of Cambridge. He is also a Senior Scientist at PolyAI, and a member of the Steering Committee of the newly established Centre for Human-Inspired Artificial Intelligence (CHIA) at Cambridge.
Ivan holds a PhD in Computer Science from KU Leuven, awarded summa cum laude. In 2021 he received the annual Karen Spärck Jones Award from the British Computer Society for his research contributions to Natural Language Processing and Information Retrieval.
His core expertise is in representation learning, cross-lingual learning, conversational AI, and human language understanding, spanning distributional, lexical, and multi-modal semantics in monolingual and multilingual contexts; transfer learning for cross-lingual NLP applications such as conversational AI in low-resource languages; and machine learning for cross-lingual and multilingual NLP. He has published numerous papers at top-tier NLP and IR conferences and journals, and his research has received several best paper awards. He serves as an area chair and regularly reviews for all major NLP and Machine Learning conferences and journals. Ivan has given numerous invited talks in academia and industry, and has co-organised a number of NLP conferences and workshops.
An Incredibly Short Introduction to Modular Deep Learning
Following the rapid increase in the size of deep learning models, there is a pressing need for parameter-efficient and modular learning strategies that can simultaneously 1) cope with such large model sizes, 2) handle the lack of annotated data for many tasks, modalities, domains, and languages, and 3) enable the creation of composable and reusable model components. In this talk, I will aim to demonstrate that modularity widens the reach of modern deep learning while boosting the efficiency and reusability of models' constituent components: modules. I will provide a (too) brief but (hopefully) systematic overview of a range of recent modular and parameter-efficient techniques, point out their high-level similarities and differences, and cover some representative applications in Natural Language Processing and related areas.
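To make the notion of a "module" concrete, a minimal sketch of one widely used parameter-efficient component, a bottleneck adapter, is given below. This is a generic NumPy illustration, not the specific methods covered in the talk: all dimensions and initialisations (`d_model`, `d_bottleneck`, the zero-initialised up-projection) are illustrative assumptions. The key idea is that only the small adapter matrices are trained while the large backbone stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter(x, W_down, W_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    h = np.maximum(x @ W_down, 0.0)   # ReLU inside the low-dimensional bottleneck
    return x + h @ W_up               # residual connection preserves the input

# Illustrative sizes (assumptions): a 768-dim backbone, a 16-dim bottleneck.
d_model, d_bottleneck = 768, 16
W_down = rng.normal(0.0, 0.02, (d_model, d_bottleneck))
W_up = np.zeros((d_bottleneck, d_model))  # zero init: adapter starts as identity

x = rng.normal(size=(4, d_model))         # a batch of 4 token representations
y = adapter(x, W_down, W_up)

# With W_up initialised to zero, the adapter is exactly the identity map,
# so inserting it leaves the frozen backbone's behaviour unchanged pre-training.
assert np.allclose(x, y)

# Only the adapter parameters are trained: 2 * 768 * 16 = 24576 values,
# a small fraction of a single d_model x d_model backbone weight matrix (589824).
n_adapter = W_down.size + W_up.size
print(n_adapter)
```

Because each adapter is a small, self-contained unit, separately trained adapters (e.g. one per task or language) can be stored, swapped, and composed on top of one shared backbone, which is the composability and reusability the abstract refers to.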