Abstract [eng]
Although crowdsourcing has grown in popularity in recent years, little research has been done on its use in web localization, and the need for such an investigation is therefore keenly felt. The aim of this work is to examine crowdsourcing as a potential tool for the localization of websites. To achieve this aim, the following objectives have been set: to establish criteria for evaluating the localization quality of the selected websites; to evaluate the quality of a website localized by means of crowdsourcing; and to compare the quality of translation in the crowdsourced website with that of a website translated by professional translators. The paper consists of two parts, theoretical and practical. The theoretical part discusses digital genres and their types, presents the layers of localization and the most important aspects of web localization that should be taken into consideration, and provides a model for quality evaluation. It has been clarified that quality in web localization is divided into external and internal quality: internal quality comprises textual, linguistic and pragmatic factors, while external quality concerns functionality, web usability and compliance with the client’s commission. In the practical part, two social networking sites, Facebook and Google+, have been analysed and compared. The former was translated by means of crowdsourcing, while the localization of the latter was carried out by a professional translation agency. The analysis focused on the translation of user interface elements and menus on the Facebook website, comparing their quality with those on the Google+ website. The analysis of the crowdsourced website revealed serious translation errors: the localized version of Facebook contained numerous inconsistencies, varying levels of formality, and large pieces of untranslated or partially translated text.
Moreover, serious case-usage errors as well as mistranslations, crude shifts in meaning and other impermissible errors were found. The site localized by the translation agency, by contrast, did not contain such errors, and the difference in quality between the two sites was considerable. The hypothesis raised at the beginning of the work, that the quality of websites localized by crowdsourcers is not as high as that of websites localized by professional translators, has been confirmed. Finally, it has been noted that the assessment of external quality can be carried out only on a very subjective level, since the field is still relatively new and there are simply no sources to aid such an evaluation.