Key Theme
Safe Society
Novel methods to detect implicit biases and abusive language.
Implicit biases
Machine learning-based algorithms help us make all kinds of decisions by extracting information from large amounts of text. Such algorithms rely on language models. These language models, however, are sometimes implicitly biased, and the recommendations the algorithms make can therefore be biased as well. We investigate the sources of these biases, how they are represented in language models, and how they can be removed to ensure ethically fair and legally just automatic decision making. We also develop methods to automatically detect biases directly in text and speech (rather than in language models).
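As a loose illustration of how implicit bias can be measured in a language model, the sketch below computes a WEAT-style association score over word embeddings: how much closer a target word sits to one attribute set than to another. The word lists and the tiny random vectors are placeholders, not the methods or data used in this research; in practice one would load trained embeddings and use curated word sets.

```python
# WEAT-style association sketch (illustrative only).
# The random vectors stand in for real trained embeddings (e.g. word2vec/GloVe).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["doctor", "nurse", "he", "she", "man", "woman"]
emb = {w: rng.normal(size=50) for w in vocab}  # placeholder embeddings

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    sim_a = np.mean([cos(emb[word], emb[a]) for a in attr_a])
    sim_b = np.mean([cos(emb[word], emb[b]) for b in attr_b])
    return sim_a - sim_b

# A positive score suggests the target word is closer to the "male" attribute set.
male, female = ["he", "man"], ["she", "woman"]
for target in ["doctor", "nurse"]:
    print(target, round(association(target, male, female), 3))
```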
Abusive language
Abusive, hateful and discriminatory language on online discussion boards and social media creates an unsafe environment for individuals and can contribute to a divided society. We develop machine learning methods to detect, monitor and mitigate such harmful language use, with the aim of enhancing online safety and reducing polarisation in society.
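A minimal sketch of what automatic detection of abusive language can look like is given below: a bag-of-words classifier trained on a handful of labelled comments. The example comments and labels are invented placeholders, and the simple TF-IDF plus logistic regression pipeline is a common baseline rather than the specific methods developed here; real systems are trained on large annotated corpora, often with fine-tuned neural models.

```python
# Baseline abusive-language classifier sketch (illustrative toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "I completely disagree with your point, but thanks for sharing.",
    "You are an idiot and nobody wants you here.",
    "Great discussion, I learned a lot from this thread.",
    "People like you should just shut up and leave.",
]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = abusive

# TF-IDF features over unigrams and bigrams, followed by a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)

print(clf.predict(["Nobody wants your opinions here, just leave."]))
```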