Abstract [eng]
The study focuses on the prevalent issue of machine-translated spam websites, which are rapidly proliferating in search engines and offer information of dubious quality. These websites use machine translation software to create content in multiple languages, potentially spreading misinformation among users. The central problem is users' lack of awareness of the machine-translated nature of the content and of the potential inaccuracies it may contain. Despite the growing abundance of such content, research on its linguistic quality and end-user acceptability is limited.

The research aims to evaluate, through error analysis, the linguistic quality of machine-translated spam websites in the Lithuanian language, as well as their end-user acceptability, employing a combination of automated and human tools for assessing machine translation output quality. The study pursues the following objectives: 1. to review the most recent literature on translation quality, the various methods of measuring it, and end-user acceptability; 2. to analyse the acceptability of machine-translated spam websites from the user's perspective; 3. to measure the quality of machine-translated spam website texts using the automatic evaluation metric BLEU; 4. to evaluate the linguistic quality of machine-translated spam website texts and identify the errors they contain, following the Multidimensional Quality Metrics (MQM) framework.

A comprehensive evaluation of machine-translated websites was conducted using a combination of qualitative, quantitative, and descriptive methods. The study employed three methodological strategies: a user-centered questionnaire to gauge the acceptability of machine-translated websites, the BLEU score for an automated evaluation of machine translation quality, and human expert analysis using the Multidimensional Quality Metrics by Lommel et al. (2014) to classify translation errors. End-user acceptability is of equal importance, as end-users are the intended recipients of the final translation product.

Based on the analysis of the data, several conclusions were reached. Translation quality evaluation studies acknowledge the lack of a universal evaluation method, owing to the subjectivity of the concept and the continuing evolution of translation technologies; combining several evaluation methods can therefore yield more comprehensive and accurate assessments of translation quality. The end-user evaluations showed varying levels of acceptability regarding translation quality: certain aspects, such as clarity and informativeness, received positive feedback, while others, such as enjoyability, trustworthiness, and overall quality, were questioned. Quality ratings also varied across age groups. Machine translation quality likewise varied across websites, as indicated by BLEU scores; these discrepancies could be attributed to text complexity, with better performance in more familiar contexts. However, the BLEU score is limited in capturing all translation errors, as it relies on exact lexical (n-gram) matches with a reference translation. Human expert evaluation using the MQM framework identified numerous translation errors across various categories, with accuracy, linguistic conventions, and terminology showing the highest error rates. This underlines the need for thorough post-editing by professional translators to enhance the quality of machine translation. By integrating diverse methodologies, namely human linguistic analysis, a social survey, and automatic language evaluation, the study yields more holistic and objective insights.
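The abstract does not specify the tooling used for the automatic evaluation. As a minimal sketch only, assuming the sacrebleu Python package and purely hypothetical Lithuanian sentences (not data from the study), a corpus-level BLEU score could be computed as follows; the example also illustrates why exact n-gram matching penalises fluent but differently worded translations.

```python
# Minimal sketch of a corpus-level BLEU computation, assuming the sacrebleu
# Python package; the sentences are illustrative placeholders, not data
# from the study.
import sacrebleu

# Machine-translated sentences as they might appear on a spam website
# (hypothetical examples).
hypotheses = [
    "Šis produktas yra geriausias pasirinkimas jums.",
    "Mūsų komanda dirba dėl jūsų pasitenkinimo kiekvieną dieną.",
]

# Corresponding human reference translations (hypothetical examples).
references = [
    "Šis produktas – geriausias pasirinkimas jums.",
    "Mūsų komanda kasdien dirba dėl jūsų pasitenkinimo.",
]

# sacrebleu expects one list of reference sentences per reference set.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])

# BLEU rewards exact n-gram overlap with the reference, so a fluent but
# differently worded translation can still receive a low score.
print(f"BLEU = {bleu.score:.2f}")
```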
The study also emphasizes that this topic is equally important to language researchers and to end-users of the translation product. As the volume of machine-translated text grows with technological advances, there is a rising need to educate society about these technologies, their advantages, and the risks tied to low-quality or inaccurate content, enabling informed decision-making. Notably, the study highlights the significance of both expert quality assessment and public involvement in evaluating translation quality.