How the new Google Translate is smarter than we think
4 Comments
  • Terence Lewis

    In our Dutch-English Neural Machine Translation system (www.mydutchpal.com) we have introduced an Intermediate Server which sits between our memoQ and Trados plugins and our NMT server. The Intermediate Server provides a word-splitting routine which can split Dutch compound nouns into their constituent parts on the basis of rules and a lexicon. This is clearly not “pure” neural machine translation, but it is an effective mechanism for avoiding a plethora of <unk> tokens in the output. But then, is absolute “purity” an imperative? At the end of the day we all want to give our users a useful translation.
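    The rule-and-lexicon splitting the comment describes might be sketched roughly as follows. This is a toy illustration only, not the mydutchpal.com implementation: the lexicon, the set of Dutch linking elements, and the greedy recursive strategy are all assumptions.

    ```python
    # Toy sketch of lexicon-based Dutch compound splitting as an NMT
    # pre-processing step. Lexicon and linking elements are illustrative.

    LEXICON = {"fiets", "winkel", "verzekering", "maatschappij", "ziekte", "kosten"}
    LINKERS = ("s", "en", "e")  # common Dutch linking elements

    def split_compound(word, lexicon=LEXICON):
        """Split a compound into known lexicon parts, allowing an optional
        linking element between parts; return None if no full split exists."""
        if word in lexicon:
            return [word]
        # try the longest known head first
        for i in range(len(word) - 1, 2, -1):
            head = word[:i]
            if head not in lexicon:
                continue
            rest = word[i:]
            tail = split_compound(rest, lexicon)
            if tail:
                return [head] + tail
            # retry with a linking element stripped off the remainder
            for link in LINKERS:
                if rest.startswith(link):
                    tail = split_compound(rest[len(link):], lexicon)
                    if tail:
                        return [head] + tail
        return None

    print(split_compound("fietswinkel"))              # ['fiets', 'winkel']
    print(split_compound("verzekeringsmaatschappij")) # ['verzekering', 'maatschappij']
    ```

    Splitting "verzekeringsmaatschappij" into parts the model has seen in training is exactly what spares the decoder from emitting an unknown-word token for the unseen compound.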

    7 August, 2017 at 14.23
  • Yifan

    Indeed, for short sections of text the translation quality sometimes degrades. The Google Translate API still provides the old phrase-based engine, and GT4T includes both. GT4T allows me to send a section of a sentence to Google, and I find the phrase-based results work better for me.
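    Choosing between the two engines can be done through the Cloud Translation API v2 `model` parameter (`base` for the phrase-based engine, `nmt` for the neural one). A minimal sketch, building the request without sending it; the API key and sample text are placeholders:

    ```python
    # Sketch: selecting the phrase-based ("base") vs neural ("nmt") engine
    # via the Cloud Translation API v2 `model` parameter. Request-building
    # only; no network call is made here.

    def build_translate_request(text, source, target, model="base",
                                api_key="YOUR_API_KEY"):
        """Return the v2 endpoint URL and query parameters for a translate call."""
        endpoint = "https://translation.googleapis.com/language/translate/v2"
        params = {
            "q": text,
            "source": source,
            "target": target,
            "model": model,  # "base" = phrase-based, "nmt" = neural
            "key": api_key,
        }
        return endpoint, params

    endpoint, params = build_translate_request("een korte zin", "nl", "en",
                                               model="base")
    print(params["model"])  # base
    ```

    A tool like GT4T could switch engines per request simply by flipping this one parameter and comparing the two outputs.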

    7 August, 2017 at 16.56
  • Gert Van Assche

    Great read, as usual, Pieter. Maybe you should also take a look at what FB (https://code.facebook.com/posts/289921871474277/transitioning-entirely-to-neural-machine-translation/) and Amazon (https://aws.amazon.com/blogs/ai/train-neural-machine-translation-models-with-sockeye/) are doing. Google is smart, but the game has just started — Google is no longer the only one setting the standard for ‘smart’. We will see many changes in the future, if not in quality, then surely in flexibility and usability.
    On a side note: My family tends to argue with me; they feel Google Translate did not improve. I guess it depends on the sentences they used, but they had this experience in ENG > DUT and ENG > FRA. As a translation professional, I disagree with them, but as a father and a husband… who am I to question their observations 😉

    9 August, 2017 at 19.13