The technology available to support localization processes—as well as the larger business strategy—has advanced rapidly in the last several years. Advancements in artificial intelligence (AI) and machine learning (ML) have paved the way for lower costs, faster turnaround times, and higher translation quality, freeing up the budgets organizations need to explore new markets.
2017 saw a particularly meteoric rise of AI and ML tools generating billions of dollars in revenue. Few industry verticals are now resistant to ML’s force. But relative to its decades-long history, the localization industry has only just gotten started. Here are four machine learning advancements we’re most excited to see emerge this year.
Though a few years old, neural machine translation (NMT) still offers a first-mover advantage. As companies seek more efficient ways to deliver more content in more languages than ever, NMT is moving from niche applications reserved for large, global enterprises into the mainstream.
Using the power of deep learning and a higher volume of training data to build an artificial neural network, NMT can work from patterns, such as contextual clues around the source sentence, that speed up and improve translations without human intervention. In fact, working from datasets too large to analyze manually, NMT can identify complicated patterns that humans could never recognize on their own.
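Those "contextual clues around the source sentence" come from a mechanism called attention: each translated word is produced from a weighted mix of every source position, rather than from one word at a time. As a rough illustration only (a toy calculation with random embeddings, not a full NMT system), here is a minimal scaled dot-product attention step:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention: each output row is a context-weighted
    mix of the values, so every target position can 'look at' the whole
    source sentence instead of a single word."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)  # similarity to each source position
    weights = softmax(scores, axis=-1)      # one probability distribution per query
    return weights @ values, weights

# Toy example: 2 target positions attending over 3 source positions,
# with 4-dimensional embeddings (random, for illustration only).
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 4))
context, weights = attention(q, k, v)
print(weights.shape)        # (2, 3): one weight per source word, per target word
print(weights.sum(axis=1))  # each row sums to 1
```

In a real NMT system these embeddings and weights are learned from millions of sentence pairs during training; the toy numbers here only show the shape of the computation.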
NMT doesn’t spell the end of human translation, but it does turn a corner in post-editing. Post-editors can now devote more time to previously marginalized aspects of the translation process like output quality, brand standards, and much-needed creative scope to adapt content seamlessly to various target audiences.
How do you deliver language quality in this world of increasing content and touchpoints? On the one hand, content growth outpaces human ability, but on the other, there’s no time to waste. One way to overcome this challenge is by automating predictable language quality checks.
Automated language QA is a collaborative and powerful quality control tool used to maximize productivity, scalability, and quality at the lowest cost. Automated QA engines use pattern recognition and other language technology approaches to identify potential problems, such as broken or missing links, inconsistent terminology, and missing content, helping linguists identify and fix problems as early as possible. Like NMT, this technology can detect more errors than human review alone.
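The predictable checks described above can be sketched as simple rules. The snippet below is a minimal, hypothetical QA pass; the function name, regexes, and glossary format are invented for illustration and do not represent any vendor's API:

```python
import re

# Illustrative patterns an automated QA engine might look for.
URL_RE = re.compile(r"https?://\S+")
PLACEHOLDER_RE = re.compile(r"\{\w+\}")

def qa_check(source, target, glossary=None):
    """Return a list of potential issues found in a source/target pair.
    glossary maps source terms to their approved translations (assumed format)."""
    issues = []
    if not target.strip():
        issues.append("missing content: target segment is empty")
    # Links present in the source must survive translation untouched.
    for url in URL_RE.findall(source):
        if url not in target:
            issues.append(f"broken or missing link: {url}")
    # Placeholders like {name} must be carried over verbatim.
    for ph in set(PLACEHOLDER_RE.findall(source)) - set(PLACEHOLDER_RE.findall(target)):
        issues.append(f"missing placeholder: {ph}")
    # Terminology: the approved translation must appear when the source term does.
    for term, approved in (glossary or {}).items():
        if term in source and approved not in target:
            issues.append(f"inconsistent terminology: expected '{approved}' for '{term}'")
    return issues

issues = qa_check(
    "Visit {name} at https://example.com for support.",
    "Visitez notre site pour de l'aide.",
    glossary={"support": "assistance"},
)
print(issues)  # flags the dropped link, the dropped placeholder, and the terminology miss
```

Production QA engines layer many more checks (numbers, punctuation, tags, length limits) and statistical models on top, but the principle is the same: machines catch the predictable errors so linguists can focus on the rest.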
Then, there’s automated interpretation. Google’s Pixel Buds, wireless earbuds designed to translate audio in real time, have taken us even closer to marrying machine translation and text-to-speech technology with high-quality results. And this is only one area that helps meet the demand for the continuous delivery now so central to localization strategies.
ML can further augment content management systems by mapping projects to the best linguist for each job. Using linguistic big data, ML can objectively identify who has the most experience with certain types of content and who has translated them best.
These ML algorithms can be used at every step of the full quality cycle. Starting with QA, ML can not only identify the right human resource but also the right linguistic resources (translation style guides and glossaries, MT customization, and so forth) and alert translators to areas with potential translatability issues, such as ambiguity and complexity. In this way, everything can be prepared from the beginning to ensure the best possible quality. ML-powered quality control components can also help fix translation errors, readability issues, or mismatches in linguistic register between source and translated content.
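As a toy illustration of this matching idea, the sketch below ranks linguists by their average past quality score on each content type. The names, scores, and scoring rule are all invented for illustration; real systems draw on far richer features than a single average:

```python
from collections import defaultdict

# Hypothetical job history: (linguist, content_type, quality_score 0-100).
HISTORY = [
    ("ana", "legal",     92), ("ana", "legal",     95),
    ("ana", "marketing", 70),
    ("ben", "legal",     80),
    ("ben", "marketing", 96), ("ben", "marketing", 90),
]

def best_linguist(content_type, history=HISTORY, min_jobs=1):
    """Pick the linguist with the highest average quality score on this
    content type -- a stand-in for the richer models ML systems use."""
    scores = defaultdict(list)
    for linguist, ctype, score in history:
        if ctype == content_type:
            scores[linguist].append(score)
    ranked = {l: sum(s) / len(s) for l, s in scores.items() if len(s) >= min_jobs}
    return max(ranked, key=ranked.get) if ranked else None

print(best_linguist("legal"))      # 'ana' (average 93.5 vs. 80)
print(best_linguist("marketing"))  # 'ben' (average 93 vs. 70)
```

The same lookup logic extends naturally to the other resources mentioned above: given a content type, a system can retrieve the matching style guide, glossary, and MT customization alongside the best-suited linguist.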
ML isn’t just for translations. It can also help localization professionals sell their ideas to the executive team by forecasting the outcome of a project before they commit to it. Technology exists today that lets us feed dozens of data points about a piece of content into a system and, by comparing them with all past projects, uncover appropriate workflows and translators, flag potential problems, and estimate how much the project will cost.
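A bare-bones version of this kind of forecast can be sketched as a nearest-neighbor comparison against past projects. Everything below (the features, the dollar figures, the choice of k=2) is invented for illustration; production predictive-analytics models are far more sophisticated:

```python
import math

# Hypothetical past projects: feature vector (word_count, languages,
# complexity 1-5) paired with the final cost in dollars.
PAST = [
    ((1000,  2, 1),   400.0),
    ((5000,  3, 2),  2600.0),
    ((20000, 5, 4), 18000.0),
    ((8000,  4, 3),  6400.0),
]

def forecast_cost(features, past=PAST, k=2):
    """Predict cost as the average over the k most similar past projects
    (plain nearest neighbors on normalized features)."""
    # Normalize each feature by its max so word count doesn't dominate.
    maxes = [max(p[0][i] for p in past) for i in range(len(features))]
    def norm(v):
        return [x / m for x, m in zip(v, maxes)]
    target = norm(features)
    dists = sorted((math.dist(target, norm(f)), cost) for f, cost in past)
    return sum(cost for _, cost in dists[:k]) / k

# A new 6,000-word project into 3 languages at complexity 2 lands nearest
# the two mid-sized past projects, so the estimate averages their costs.
print(forecast_cost((6000, 3, 2)))  # 4500.0
```

Even a crude estimate like this gives a localization team a defensible number to bring to the executive conversation, which is precisely the selling point described above.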
As many organizations know, the returns on localization investment can be difficult to quantify. Yet it’s essential for decision makers to understand the value of a localization strategy. Using predictive analytics—as well as non-financial ROI metrics like customer satisfaction, customer retention, and brand analytics—localization teams can more easily get approval for projects that later provide the hard data.
Today, ML is key to getting a linguistic workflow off to a good start and optimizing every step thereafter. And industry insiders are confident that, within the next five years, this workflow will be driven by big data. The future belongs to businesses that harness cutting-edge technologies like AI, ML, NMT, and predictive analytics to make data-backed business decisions and build more accurate customer profiles.
At the same time, massive amounts of data are needed to train the technology in the first place. Organizations hoping to deliver natural, accurate, relevant, and intuitive solutions face significant challenges unless they enlist third-party capabilities to overcome them. Language service providers come with the linguistic operations needed to generate high-quality global data, combining technology with global user experience testing to continually improve the human experience. Better still, they come with deep roots in local culture.
Developing the right localization partnership and go-to-market strategy will have a remarkable impact on a business’s growth potential in 2018. Not sure where yours should start? Get tips on driving localization value in our whitepaper.