Human evaluation of three machine translation systems: from quality to attitudes by professional translators

  • Anna Fernández Torné Universitat Autònoma de Barcelona
  • Anna Matamala Universitat Autònoma de Barcelona
Keywords: machine translation, quality evaluation, human evaluation, automatic metrics, post-editing effort


This article compares three machine translation systems with a focus on human evaluation. The systems under analysis are a domain-adapted statistical machine translation system, a domain-adapted neural machine translation system, and a generic machine translation system. The comparison is carried out on translation from Spanish into German of industrial documentation on machine tool components and processes. The focus is on the human evaluation of the machine translation output, specifically on: fluency, adequacy, and ranking at the segment level; fluency, adequacy, need for post-editing, ease of post-editing, and mental effort required in post-editing at the document level; productivity (post-editing speed and post-editing effort); and professional translators' attitudes. Emphasis is placed on human factors in the evaluation process.
