Human evaluation of three machine translation systems: from quality to attitudes by professional translators

Authors

  • Anna Fernández Torné Universitat Autònoma de Barcelona
  • Anna Matamala Universitat Autònoma de Barcelona

DOI:

https://doi.org/10.35869/vial.v0i18.3366

Keywords:

machine translation, quality evaluation, human evaluation, automatic metrics, post-editing effort

Abstract

This article compares three machine translation systems with a focus on human evaluation. The systems under analysis are a domain-adapted statistical machine translation system, a domain-adapted neural machine translation system and a generic machine translation system. The comparison is carried out on translation from Spanish into German of industrial documentation on machine tool components and processes. The focus is on the human evaluation of the machine translation output, specifically on: fluency, adequacy and ranking at the segment level; fluency, adequacy, need for post-editing, ease of post-editing and mental effort required in post-editing at the document level; productivity (post-editing speed and post-editing effort); and attitudes. Emphasis is placed on human factors in the evaluation process.

Published

2021-01-18

Issue

Section

Articles