Key Theme
Responsible Methods
Novel AI methods for language technology that add transparency and are not restricted to mainstream languages.
Transparency
Deep learning models have revolutionized the field of Natural Language Processing, but their black-box nature makes their inner workings notoriously difficult to interpret and explain. This limitation has become the central obstacle to responsible use of the technology, even as more and more companies and institutions adopt it and the outcomes of these systems increasingly affect people’s lives. Without interpretability, (i) there is no transparency about the outputs of these systems, (ii) it is difficult to integrate existing expertise, and (iii) users cannot easily adapt the systems to their needs. We develop methods that yield insights into existing deep learning models (“post-hoc interpretability”) and identify ways to constrain these models so that they become more interpretable without sacrificing performance (“explainability by design”).
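As a concrete illustration of what post-hoc interpretability can look like in practice, the sketch below computes gradient-based saliency scores for a toy text classifier: the gradient of the predicted class score with respect to each input token's embedding indicates how much that token mattered for the prediction. The model, vocabulary, and input here are hypothetical placeholders chosen for illustration, not the methods developed in this Focus Area.

```python
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """Toy classifier: embeddings -> mean pool -> linear layer."""
    def __init__(self, vocab_size=10, embed_dim=8, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.out = nn.Linear(embed_dim, num_classes)

    def forward(self, embedded):
        # Takes pre-computed embeddings so gradients w.r.t. them can be inspected.
        return self.out(embedded.mean(dim=1))

torch.manual_seed(0)
model = ToyClassifier()
tokens = torch.tensor([[1, 4, 7, 2]])  # one toy "sentence" of token ids

# Embed the tokens and retain the gradient at this intermediate tensor.
embedded = model.embed(tokens)
embedded.retain_grad()
logits = model(embedded)

# Backpropagate the predicted class score to the input embeddings.
predicted_class = logits.argmax(dim=-1).item()
logits[0, predicted_class].backward()

# Saliency per token: L2 norm of its embedding's gradient.
# Larger values suggest tokens that influenced this prediction more.
saliency = embedded.grad.norm(dim=-1).squeeze(0)
print(saliency.tolist())
```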
Beyond mainstream
Two Scotsmen standing in an elevator equipped with voice recognition technology, utterly failing to get to floor eleven (see this video): a comical yet sad reminder that generic language and speech technology is still heavily biased towards standard languages, if it is available for a language market at all. Capturing regional and ethno/sociolectal language variation in language and speech technology applications, which is entirely possible with AI-driven NLP and voice technologies and with carefully captured and curated data, will make this technology more usable and more inclusive. This Focus Area will develop technologies such as machine translation and text generation tools; application areas will include (but will not be limited to) variants of spoken and written Dutch.