Thesis, Year: 2017

Mining and Learning from Multilingual Text Collections using Topic Models and Word Embeddings

French title: Exploring and Learning from Multilingual Text Collections Using Latent Probabilistic Models and Deep Networks

Abstract

Text is one of the most pervasive and persistent sources of information. Content analysis of text, in its broad sense, refers to methods for studying and retrieving information from documents. With ever-increasing amounts of text becoming available online, in several languages and writing styles, content analysis is of tremendous importance, as it enables a variety of applications. To this end, unsupervised representation learning methods such as topic models and word embeddings constitute prominent tools. The goal of this thesis is to study and address challenging problems in this area, focusing both on the design of novel text mining algorithms and tools, and on how these tools can be applied to text collections written in one or several languages.

In the first part of the thesis we focus on topic models and, more precisely, on how to incorporate prior knowledge of text structure into them. Topic models are built on the bag-of-words premise, under which words are exchangeable. While this assumption simplifies the calculation of conditional probabilities, it results in a loss of information. To overcome this limitation we propose two mechanisms that extend topic models by integrating knowledge of text structure. We begin by assuming that documents are partitioned into thematically coherent text segments. The first mechanism then assigns the same topic to all words of a segment. The second capitalizes on the properties of copulas, a tool mainly used in economics and risk management to model the joint probability distribution of random variables when only their marginals are accessible. Through copulas we propose flexible topic models that can capture different degrees of dependence between the topics of a segment; a toy illustration of the copula construction follows.
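The abstract does not specify which copula family the proposed models use, so the sketch below illustrates only the tool itself, not the thesis models: sampling from a Gaussian copula yields coordinates whose marginals are each uniform while their joint behavior is strongly dependent, which is precisely the separation between marginals and dependence structure that copulas provide.

```python
import numpy as np
from scipy import stats

# Gaussian copula sketch: couple two Uniform(0, 1) marginals through a
# latent correlation rho. The marginals stay uniform; only the joint
# dependence structure changes with rho.
rng = np.random.default_rng(0)
rho = 0.8
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u = stats.norm.cdf(z)  # apply the normal CDF coordinate-wise

print(u.mean(axis=0))                       # ~[0.5, 0.5]: uniform marginals
print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])  # high, reflecting the latent rho
```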
The second part of the thesis explores bilingual topic models for comparable corpora with explicit document alignments. Typically, a document collection for such models consists of comparable document pairs: the documents of a pair are written in different languages and are thematically similar. Unless they are exact translations, the documents of a pair are only similar to some extent. Yet representative topic models assume that the paired documents have identical topic distributions, which is a strong and limiting assumption. To overcome it we propose novel bilingual topic models that incorporate the cross-lingual similarity of the paired documents into their generative and inference processes. Calculating this cross-lingual document similarity is a task in itself, which we propose to address using cross-lingual word embeddings.

The last part of the thesis concerns the use of word embeddings and neural networks for three text mining applications. First, we discuss polylingual document classification, where we argue that translations of a document can be used to enrich its representation. Using an autoencoder to obtain such robust document representations, we demonstrate improvements on the task of multi-class document classification. Second, we explore multi-task sentiment classification of tweets, arguing that jointly training classification systems on correlated tasks can improve performance; we show how one can achieve state-of-the-art results on a sentiment classification task using recurrent neural networks. The third application is cross-lingual information retrieval: given a document written in one language, the task consists in retrieving the most similar documents from a pool of documents written in another language. In this line of research, we demonstrate how, by adapting the transportation problem to estimate document distances, one can achieve important improvements; a sketch of the underlying computation follows.
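The abstract frames retrieval around the transportation problem over document distances; the best-known instance of this idea is the Word Mover's Distance, which measures how much "work" is needed to move one document's word-embedding mass onto the other's. Below is a minimal sketch that solves the underlying linear program with scipy; the function name, toy data, and choice of a plain LP solver are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.optimize import linprog

def emd_document_distance(X, Y, a, b):
    """Earth mover's distance between two bags of word embeddings.

    X: (n, d) embeddings of doc 1's distinct words; a: (n,) their weights.
    Y: (m, d) embeddings of doc 2's distinct words; b: (m,) their weights.
    """
    n, m = len(a), len(b)
    # Ground cost: Euclidean distance between every cross-document word pair.
    M = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # The transport plan T is an (n, m) matrix, flattened row-major for the LP.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # mass leaving word i equals a[i]
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # mass reaching word j equals b[j]
    res = linprog(M.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

# Toy example: 3 vs. 2 "words" in a 4-dimensional embedding space.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(3, 4)), rng.normal(size=(2, 4))
print(emd_document_distance(X, Y, np.full(3, 1 / 3), np.full(2, 1 / 2)))
```

For cross-lingual retrieval the embeddings would come from a shared cross-lingual space, so that distances between words of different languages are meaningful.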
In this thesis we are interested in learning text representations based on the distributional hypothesis, which states that linguistic items that co-occur in the same contexts with the same frequency are similar. In the first part of the thesis we consider probabilistic latent models for monolingual and bilingual text corpora. We identify certain limitations of these models, for example the fact that they do not take the structure of the text into account, and we propose solutions that do. The second part of the thesis concerns word embeddings, that is, continuous word representations learned with deep networks. We study different text classification settings and document retrieval problems, and we propose algorithms that benefit from the expressiveness of word embeddings, either through deep neural networks or through a reformulation of the problem in terms of optimal transport.
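Both summaries mention learning document representations with deep networks; the third part, in particular, uses an autoencoder to build enriched document representations. The snippet below is a minimal PyTorch sketch of that general idea, compressing bag-of-words vectors into dense codes via reconstruction. All names and dimensions are illustrative, and the thesis variant, which also exploits a document's translations, is not reproduced here.

```python
import torch
import torch.nn as nn

class DocAutoencoder(nn.Module):
    """Compress sparse bag-of-words vectors into dense document codes."""
    def __init__(self, vocab_size: int, code_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, vocab_size)

    def forward(self, x):
        code = self.encoder(x)           # dense representation used downstream
        return self.decoder(code), code

model = DocAutoencoder(vocab_size=20000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 20000)                # stand-in for a batch of BoW vectors
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective
opt.zero_grad(); loss.backward(); opt.step()
```

The dense `code` (rather than the raw bag-of-words vector) would then feed a standard multi-class classifier.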
Main file: thesis_balikas_final.pdf (2.96 MB)
Origin: Files produced by the author(s)

Dates and versions

tel-01706347, version 1 (11-02-2018)

Identifiers

  • HAL Id: tel-01706347, version 1

Cite

Georgios Balikas. Mining and Learning from Multilingual Text Collections using Topic Models and Word Embeddings. Artificial Intelligence [cs.AI]. Grenoble 1 UGA - Université Grenoble Alpes, 2017. English. ⟨NNT : ⟩. ⟨tel-01706347⟩
