Title |
Testing pre-trained transformer models for Lithuanian news clustering
Authors |
Stankevičius, Lukas ; Lukoševičius, Mantas |
Is Part of |
CEUR workshop proceedings: IVUS 2020: Information society and university studies 2020: proceedings of the information society and university studies 2020, Kaunas, Lithuania, April 23, 2020 / edited by A. Lopata, V. Sukackė, T. Krilavičius, I. Veitaitė, M. Woźniak. Aachen: CEUR-WS, 2020, vol. 2698, p. 46-53. ISSN 1613-0073
Keywords [eng] |
Document clustering ; document embedding ; Lithuanian news articles ; Transformer model ; BERT ; XLM-R ; multilingual |
Abstract [eng] |
The recent introduction of the Transformer deep learning architecture led to breakthroughs in various natural language processing tasks. However, non-English languages could not leverage such new opportunities with models pre-trained on English text. This changed with research focusing on multilingual models, of which less-spoken languages are the main beneficiaries. We compare pre-trained multilingual BERT, XLM-R, and older learned text representation methods as encodings for the task of Lithuanian news clustering. Our results indicate that publicly available pre-trained multilingual Transformer models can be fine-tuned to surpass word vectors but still score much lower than specially trained doc2vec embeddings.
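The record does not include the paper's implementation details, so as a rough illustration of the encode-then-cluster approach the abstract describes, the following minimal Python sketch derives document vectors from multilingual BERT and clusters them. The model name, mean-pooling strategy, k-means algorithm, cluster count, and toy texts are all assumptions for illustration, not the paper's confirmed pipeline.

    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.cluster import KMeans

    # Hypothetical toy corpus; the paper uses Lithuanian news articles.
    texts = [
        "Vyriausybė pristatė naują biudžeto projektą.",
        "Krepšinio rinktinė laimėjo draugiškas rungtynes.",
    ]

    # Multilingual BERT, one of the pre-trained models the paper compares.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModel.from_pretrained("bert-base-multilingual-cased")

    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

    # Mean-pool token vectors, ignoring padding, to get one vector per document
    # (an assumed pooling choice; the paper may pool or fine-tune differently).
    mask = inputs["attention_mask"].unsqueeze(-1)
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

    # Cluster the document embeddings; k=2 matches the toy corpus above.
    labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings.numpy())
    print(labels)

The same clustering step would apply unchanged to doc2vec or word-vector encodings, which is what makes the encodings directly comparable in the study.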
Published |
Aachen : CEUR-WS |
Type |
Conference paper |
Language |
English |
Publication date |
2020 |