M.S. Thesis Presentation: "Measuring and Improving Interpretability of Word Embeddings Using Lexical Resources," Lütfi Kerem Şenel (EE), UMRAM, 13:00 August 6 (EN)

SEMINAR: “Measuring and Improving Interpretability of Word Embeddings Using Lexical Resources”
Lütfi Kerem Şenel
M.S. in Electrical and Electronics Engineering
Assoc. Prof. Tolga Çukur

The seminar will be held on Tuesday, August 6, 2019 at 13:00 at UMRAM.

As a ubiquitous method in natural language processing (NLP), word embeddings are extensively employed to map the semantic properties of words into dense vector representations. They have become increasingly popular due to their state-of-the-art performance in many NLP tasks. Since word embeddings are substantially successful in capturing semantic relations among words, a meaningful semantic structure must be present in the respective vector spaces.

However, in many cases this semantic structure is broadly and heterogeneously distributed across the embedding dimensions. In other words, the vectors corresponding to words are meaningful only relative to each other; neither a vector nor its dimensions have any absolute meaning, making the interpretation of dimensions a major challenge. We propose a statistical method to uncover the latent semantic structure underlying dense word embeddings. To perform our analysis, we introduce a new dataset (SEMCAT) that contains more than 6,500 words semantically grouped under 110 categories. We further propose a method to quantify the interpretability of word embeddings as a practical alternative to the classical word intrusion test, which requires human intervention.

Moreover, to improve the interpretability of word embeddings while leaving the original semantic learning mechanism largely unaffected, we introduce an additive modification to the objective function of the embedding learning algorithm GloVe that encourages the vectors of words semantically related to a predefined concept to take larger values along a specified dimension. We use Roget's Thesaurus to extract concept groups and align the words in these concept groups with embedding dimensions using the modified objective function. Through detailed evaluations, we show that the proposed method improves interpretability drastically while preserving the semantic structure. We also demonstrate that the imparting method, with suitable concept groups, can be used to significantly improve performance on benchmark tests and to measure and reduce gender bias present in word embeddings.
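To make the idea of an additive modification concrete, a minimal sketch in LaTeX is given below. The first line is the standard GloVe objective; the second adds a hypothetical penalty term of the kind described above. The symbols $S_d$ (the concept group assigned to dimension $d$), the weight $k$, and the penalty function $g$ are illustrative assumptions, not necessarily the exact formulation used in the thesis.

```latex
% Standard GloVe objective over co-occurrence counts X_{ij}:
J_{\text{GloVe}} = \sum_{i,j=1}^{V} f(X_{ij})
    \left( \mathbf{w}_i^{\top} \tilde{\mathbf{w}}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

% Sketch of an additive interpretability term: for each dimension d with an
% assigned concept group S_d (e.g., from Roget's Thesaurus), a decreasing
% penalty g encourages member words to take large values along dimension d:
J = J_{\text{GloVe}} + k \sum_{d=1}^{D} \sum_{i \in S_d} g\!\left( w_{i,d} \right)
```

Because the extra term is additive and weighted by $k$, the original co-occurrence-fitting mechanism is left mostly intact, which is consistent with the abstract's claim that semantic structure is preserved.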