Translation engines think more and more like human beings
For years, Google Translate has been the undisputed go-to for anyone who wants to translate a text quickly. Nevertheless, competitors of the translation service are coming up with ever better translation technology.
A conference with Google, Amazon and Facebook as the main sponsors should be about technology, apps and the latest smartphones, shouldn’t it? Nothing could be further from the truth. The three technology giants – with a combined market value of a cool $1.8 trillion – lent their names to a conference on machine translation last year. Just like Apple, China’s Baidu and many smaller companies, they invest large sums in technologies that translate texts ‘automatically’. Their goal: to perfect the technique to the point where computer translations can no longer be distinguished from human ones. That ideal is getting closer and closer.
Original article in Reformatorisch Dagblad
Machine translation
Few consumers have never used Google Translate. The translation service has existed for more than ten years and translates more than 143 billion words into around 100 languages every day. Until last year, this was done by means of so-called statistical machine translation (SMT). In short, this technology analyzes source texts and their translations, building an overview of each word and its correct translation. A large number of probability calculations and a good deal of higher mathematics then produce a literal translation of the source text. This not only generates hilarious results, but also often leads to mistakes when a translation is translated back. For example, translate the English ‘Translation matters’ into Japanese and back again, and the translation engine comes up with a completely different result: ‘The problem of translation’.
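To make the statistical idea concrete, here is a minimal sketch: each source word is mapped to the translation that a probability table says is most likely. The probability tables below are entirely made up for illustration (and use Dutch rather than Japanese); real SMT systems work with phrase tables, reordering models and language models on top of this.

# Minimal sketch of word-based statistical translation. The probability
# table P(target | source) is invented for illustration; real systems
# estimate it from huge parallel corpora.
translation_table = {
    "translation": {"vertaling": 0.82, "omzetting": 0.11, "probleem": 0.07},
    "matters":     {"is belangrijk": 0.48, "kwesties": 0.35, "problemen": 0.17},
}

def translate_word(word: str) -> str:
    """Pick the target word with the highest estimated probability."""
    candidates = translation_table.get(word.lower())
    if candidates is None:
        return word  # unknown words pass through untranslated
    return max(candidates, key=candidates.get)

def translate(sentence: str) -> str:
    return " ".join(translate_word(w) for w in sentence.split())

print(translate("Translation matters"))

Because each word is chosen in isolation, a round trip through such a system can easily drift to a different reading, as the ‘Translation matters’ example shows.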
Neural networks
However, with the advent of new technologies, the days of literal machine translations seem to be numbered. Last year, Google introduced a new version of its translation service, based on machine learning. It uses so-called ‘neural networks’, which imitate the human brain and continuously learn new things. Unlike statistical machine translation, there is no need to upload a new translation database every now and then; results can be improved almost instantly.
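For the technically curious, the architecture behind neural machine translation is usually an encoder-decoder (‘seq2seq’) network. The sketch below, in PyTorch, is illustrative only: the vocabulary sizes, dimensions and random token ids are toy values, and production systems are vastly larger and add attention mechanisms.

# A minimal encoder-decoder sketch of neural machine translation.
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, dim=64):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # The encoder compresses the source sentence into a hidden state...
        _, state = self.encoder(self.src_embed(src_ids))
        # ...which the decoder unrolls into target-language predictions.
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), state)
        return self.out(dec_out)  # scores over the target vocabulary

model = TinySeq2Seq()
src = torch.randint(0, 1000, (1, 5))  # a toy 'sentence' of 5 token ids
tgt = torch.randint(0, 1000, (1, 6))
print(model(src, tgt).shape)  # torch.Size([1, 6, 1000])

Because the whole model is a set of trainable weights rather than a fixed database, it can keep improving with further gradient updates – which is what makes the ‘continuously learning’ claim possible.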
Nevertheless, the translation quality of the new version of Google Translate is already being outstripped. DeepL, a German translation engine launched last year, produces significantly better results in many cases, helped by its own artificial intelligence.
However, DeepL’s results also show that machine translation is still a step too far for certain languages. Some language combinations are easier to capture in complicated calculations than others. Moreover, there is not always enough data available to train a translation engine adequately.
BLEU score
Nevertheless, with the emergence of neural networks, translation engines can achieve better output with less input. The quality of that output is measured by the BLEU score (‘bilingual evaluation understudy’), which automatically compares translated texts with those of a human translator. The idea behind BLEU is very simple: the closer an automatic translation gets to a professional human translation, the better it is. That is why the companies behind translation services strive for the highest possible BLEU score. However, the BLEU score is not the final word. After all, a high BLEU score for one language combination says nothing about the quality of another language combination. Moreover, language is not an exact science: there are usually many good ways to translate the same text, so even a perfect translation can have a poorer BLEU score.
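A rough illustration of how BLEU works, using NLTK’s implementation: a candidate translation is scored by how many of its n-grams overlap with one or more human reference translations. The sentences below are toy examples.

# Toy BLEU comparison with NLTK. Bigram weights keep the example simple;
# the standard metric uses up to 4-grams over whole documents.
from nltk.translate.bleu_score import sentence_bleu

reference = ["translation", "is", "important", "for", "global", "business"]
good = ["translation", "is", "important", "for", "international", "business"]
poor = ["the", "problem", "of", "translation"]

print(sentence_bleu([reference], good, weights=(0.5, 0.5)))  # high score
print(sentence_bleu([reference], poor, weights=(0.5, 0.5)))  # near zero

The first candidate scores far higher because most of its words and word pairs match the reference. This also shows BLEU’s weakness mentioned above: a perfectly good translation that simply uses different wording than the reference will score poorly, which is why multiple references are often used.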
Adaptive machine translation
Nevertheless, neural networks seem to be shaping the future of translation services. Not only are translations getting better and better, but thanks to neural networks, translation engines can also learn quickly. Last year, the American company Lilt was the first to introduce ‘adaptive machine translation’, a technology that learns from corrections to machine translations and applies them directly to subsequent translations. An additional advantage is that the translation program continually learns more about the language and style of the user, so translations also become more human.
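The core of the adaptive idea is a feedback loop: the translator’s corrections flow straight back into the system and apply to everything that follows. The sketch below is a deliberately simple stand-in – Lilt’s actual product updates a neural model interactively, whereas this version only remembers exact corrected segments.

# Minimal sketch of the adaptive feedback loop, not Lilt's real method.
class AdaptiveTranslator:
    def __init__(self, base_translate):
        self.base_translate = base_translate  # any machine translation function
        self.corrections = {}                 # source segment -> approved translation

    def translate(self, segment: str) -> str:
        # Prefer what the translator taught us over raw machine output.
        return self.corrections.get(segment) or self.base_translate(segment)

    def correct(self, segment: str, fixed: str) -> None:
        # Feed a post-edit back in; it applies to all subsequent segments.
        self.corrections[segment] = fixed

mt = AdaptiveTranslator(lambda s: f"<machine translation of '{s}'>")
print(mt.translate("Translation matters"))  # raw machine output
mt.correct("Translation matters", "Vertalen doet ertoe")
print(mt.translate("Translation matters"))  # now the corrected version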
A billion-dollar market
Returning to Google, Amazon and Facebook: why were these companies queuing up to sponsor a conference on machine translation? Not least because of the huge potential of the machine translation market. The global translation market is expected to be worth nearly $47.5 billion in 2021, and the companies behind the translation engines would like a piece of that pie. In addition, companies such as Facebook can retain users by automatically translating messages into their native language. With their knowledge of translation, they can also save on the costs they now incur to translate their own platforms – something that costs millions of euros every year. And in the end, it also applies here: knowledge is power. Providing the best translation service attracts users and builds knowledge and skills that can be converted into hard cash. But the rapid rise of DeepL shows that the next blow might come from unexpected quarters.
This article was published in Reformatorisch Dagblad on June 20, 2018
Nancy Hall
Very far-sighted post, thanks for sharing your ideas.
🙂
Harvey Utech
So where does that leave today’s freelance translator? As the article suggests, it is just a matter of time before our skills are completely unnecessary. We will be in the same unemployment line with former truck drivers, retail clerks, and bank tellers. How quickly will we be displaced? Will some niches remain for us? Is anyone in the industry talking about this?
Pieter Beens
Great question, Harvey, which I believe is answered implicitly in the industry. Give me a few weeks and I will publish about it.
Tom Hoar
Harvey, I’m preparing to publish results of comparing personalized SMT engines with Slate Desktop to Google’s so-called “advanced NMT” engines. A personalized engine is one that was trained with only one translator’s personal work and therefore delivers personalized results matching that translator. The evaluation sets are statistically significant representative samples of the translator’s lifetime work. Therefore, the evaluation results represent what that translator will experience in future work in the same genre/domain. The results are eye-opening. So much so, that I might even spend some money to test against DeepL. In short, freelancers who respect and use their own work (TMs) and use them for their own jobs have nothing to fear from any of these “advances.” Stay tuned.
Sofi Linde
Hi Tom, interesting results there! Do you have any link to read further on the subject and see results? Many thanks in advance!