Reaching the Next Billion Users with Global Content

08/04/2019
9 minute read

TAUS Summits continue to look into the future of global content, this time in New York. Here is your briefing on what leading players in the translation and localization industry have to say about reaching the next billion users with global content.

Language is both the barrier and the enabler in delivering a service to our global customers. With that in mind, it seems logical that local content sits at the center of any business’ global expansion and growth. But has this perception of localization, its value, and its place in the ecosystem advanced throughout the whole supply chain? Are we ready to say Farewell Localization and welcome the age of AI and global content, where the translation pipeline is invisible, autonomous, and data-driven? We searched for answers at the TAUS Global Content Summits.

After a successful first gathering in Amsterdam (full report here), TAUS continued its Global Content Tour 2019 and brought the global content discussion to New York on March 21. We were curious to hear how New York’s thriving language industry perceives and deals with the shifts in technology and workflows and the changing role of people in the process. So, we asked the locals.

Microchanges and Megashifts

Jack Welde, CEO of the New York-based Smartling, shared with us his perspective on the present and future of language services. In his view, we can expect microchanges and megashifts.

The user experience has changed, says Jack: it has become more transactional. Users want to click and swipe, and content is the essence of those actions and transactions. On the micro level, we see increased adoption of cloud-based tools, along with expectations of interoperability and connectivity to serve the needs of this new customer. On the macro level, businesses are reaching a point where they can ask themselves: how do we find the next billion users? The answer probably won’t lie in the usual places, but rather in markets like China, India, and Africa, argues Jack.

Automation plays a major role today and will continue to do so. Every second, 32 decisions need to be made about how content is generated, managed, and presented to the end user. The focus is shifting to management by exception, and instead of localization strategies, we now need digital transformation strategies.

Referring to the TAUS theme of the year, Fixing the Translation Ecosystem, Jack concludes that the ecosystem is not necessarily broken, but it is in the middle of big changes. We will see more human interaction with the content, more dynamic workflows, and a convergence of translation memory and machine translation. The nature of the tools will change, and so will the expectations of the people working with these tools.

Data in Service of Operational Excellence

How far are we today from the dream of frictionless localization, where smart tools, great AI, excited translators, and project managers all work together? Georg Kirchner (Dell-EMC) and Julien Didier (TransPerfect) addressed this question from a client-provider perspective.

Looking back at the last ten years, not much has changed in terms of tools and perception, believes Georg: translation is still a tedious process. On the other hand, he notes that MT has come a long way; we don’t have to apologize for it anymore. Julien agrees with the latter, adding that PEMT has grown from 0 to 40% of the work, and MT is finally changing that work in a meaningful way.

Advancements are also visible in processes and data collection capacity. The use of a TMS is a given now, explains Julien. Instead of focusing on manual execution, people are now looking at processes and handling exceptions. However, despite TMS adoption, project managers still make up between 15% and 30% of the supply chain. Georg believes that further improving the process and the systems requires a lot of data. Collecting data is key; it needs to be done right from the get-go and kept always on, and that is what they focus on at Dell-EMC.

And how about the people? The depth of the resource pool is the differentiator for service providers, and there is built-in resistance among those resources to changing the way they work, says Julien. Georg thinks that we should rely on data here as well. Operational excellence for clients suffers from the fragmented resource pool. Having a single resource pool at the industry level, with an availability calendar handled by an algorithm, would be more efficient, and it would not diminish the value of LSPs.

Data-driven Quality Management

Without measuring, you can’t manage. And if the way you measure is not comparable with how others in the industry measure, it is hard to establish a common benchmark. Katerina Gasova explained how RWS Moravia guides its clients on the advantages of deploying TAUS DQF as a standard model for quality evaluation.

The challenge appears when one tries to find a quality evaluation methodology that is use-case-specific and captures data in a format that allows for data-driven quality management. Translation quality evaluation standards exist, but they are often not normative or granular enough, or they use different taxonomies and measurements, resulting in incompatible quality data.

TAUS DQF looks at quality in a dynamic and holistic way, with regard to what is relevant for a specific use case and the intended user experience. It allows for benchmarking thanks to the harmonized DQF-MQM error typology and clearly defined in-production data collection and categorization principles, facilitating data-driven quality management.
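To illustrate what harmonized, in-production quality data can enable, here is a minimal sketch of an MQM-style weighted error score in Python. The error categories and severity weights below are illustrative assumptions, not the official DQF-MQM taxonomy or weights.

```python
# Illustrative severity weights (assumed, not the official DQF-MQM values)
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors, word_count):
    """Compute an MQM-style quality score: 100 minus the weighted
    error penalty, normalized per 100 words of evaluated text."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return 100.0 - (penalty / word_count) * 100.0

# Hypothetical annotations: (error category, severity)
errors = [("accuracy", "major"), ("fluency", "minor"), ("terminology", "minor")]
print(quality_score(errors, word_count=250))  # -> 97.2
```

Because every evaluator counts the same categories with the same weights, scores computed this way stay comparable across vendors and projects, which is precisely what makes benchmarking possible.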

The Most Meaningful Data Point

We continued the conversation about measuring translation performance and important metrics in a panel discussion with Dominique Goldstein (Google), Georg Kirchner (Dell-EMC), Julien Didier (TransPerfect) and Katerina Gasova (RWS Moravia), moderated by Jaap van der Meer.

For Google, the most meaningful data points are quality, cost, and turnaround time, combined with user feedback, explains Dominique. To be meaningful, data needs to be at scale. Julien adds that edit distance is a proxy measure that is easy to understand, but it is hard to draw a correlation between edit distance and productivity. RWS Moravia measures the effort needed to work with MT and correlates that data with sampling data.
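To make the edit-distance idea concrete, here is a minimal sketch of the classic Levenshtein distance, normalized by string length as a rough proxy for post-editing effort. The MT output and post-edited sentence are invented examples; as the panel notes, correlating such scores with productivity remains the hard part.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insertions,
    deletions, and substitutions, each with cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical MT output vs. its post-edited version
mt = "The contract must been signed by both parties."
pe = "The contract must be signed by both parties."
dist = levenshtein(mt, pe)
print(dist / max(len(mt), len(pe)))  # normalized: 0.0 means no edits at all
```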

Everyone on the panel agreed that it is important to be able to benchmark against the industry and to remove bias and subjectivity with data. The challenge is to have the same definitions and enough of your own data to compare, adds Georg.

New Identity for the Future Language Experts

The data-driven approach affects not only business development but also the development of the people behind those businesses. Changes in the types of clients and customers, in tools and technologies, and the increased demand for long-tail languages and differentiated domain expertise require a radically different approach to how future talent is trained. Claudia Brauer, translator training consultant, proposed new ways of growing the language workforce of the 21st century.

Claudia believes that translators and interpreters need a new identity that unites both competencies. This new talent force of Transinterpreters or Pemintechs (Pemt+Interpreter+Tech-savvy+Soft-skilled) embraces new tools and is digital, mobile, and technically competent. They are skilled in problem-solving and teamwork, and they adapt quickly to change. Pemintechs will be recruited from new pools of bilingual individuals worldwide, predicts Claudia, creating a fresh pipeline of human resources.

Embracing the Global Content Economy

We live in the content economy, and every company is a media company, explained Ivan Smolnikov, co-founder of Smartcat. In this new economy, sales and marketing converge. Today’s customers are acquired through social media and make their purchase decisions based on the content they consume. They expect a connected and agile experience. So the traditional approach to translation, manual and disconnected from the source, does not work anymore. Establishing an automated and continuous content update loop is probably the only way to win and retain a customer.

The connected experience can be achieved with the help of big data and AI:

  • Connectivity: AI can profile content, analyze usage data, and choose the right workflow
  • Management: automatic choice of the right workflow and combining MT with other assets
  • Scalability: automatic usage data analysis and scaling supply on demand
  • Delivery and feedback loops: content auto-generation, sentiment analysis, and chatbots
  • Payments: solved with fintech and legaltech
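As a rough sketch of what "choosing the right workflow" from a content profile might look like, here is a hypothetical rule-based router. The content types, workflow names, and audience threshold are all invented for illustration; a production system would presumably learn such rules from usage data rather than hard-code them.

```python
# Hypothetical routing rules: high-visibility marketing copy gets full human
# translation, support content gets MT plus post-editing, and short-lived
# user-generated content ships as raw MT.
WORKFLOWS = {
    "marketing": "human_translation_with_review",
    "support":   "mt_plus_post_editing",
    "ugc":       "raw_mt",
}

def route(content_type: str, audience_size: int) -> str:
    """Choose a translation workflow from a simple content profile."""
    workflow = WORKFLOWS.get(content_type, "mt_plus_post_editing")
    # Escalate anything unusually visible, whatever its content type.
    if audience_size > 1_000_000 and workflow == "raw_mt":
        workflow = "mt_plus_post_editing"
    return workflow

print(route("ugc", audience_size=5_000_000))  # -> mt_plus_post_editing
```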

Turning the Direction of the Linguistic Tanker

Almost two years into his role as captain of one of the world’s largest language service providers, John Fennelly shared with us some of his experiences turning this linguistic tanker toward a new world of intelligent machines.

Operations mean exceptions, and speed is paramount, explains John. AI is where we are headed, and that explains why Lionbridge acquired Gengo. Life Sciences, Gaming, and IT are becoming big domains for Lionbridge, and technology can be of great help in handling the increasing volumes.

Comparing the content industry with fintech, John concludes that commonalities outweigh the differences, with the exception of the word-based pricing model. Whether it will change to a value-based pricing model depends on the industry. The economy and customers drive the services and the format we deliver them in. The main question is: How do we provide as much work to translators as possible?

Quantum Leap Conversation

In the final part of the summit, we focused on use cases and experiences with MT and AI in the service and technology companies.

How can a company use language technology to evolve beyond business-as-usual services? Tom Shaw shared Capita’s use cases, ranging from social listening, keyword monitoring, and sentiment analysis in any language to customer support and international employee screening services. The use of MT and AI, and effectively combining human translators with machines, helped Capita achieve outstanding results: better outcomes for 1,200 schools in the UK, over 1 million criminality checks every year, and screening services in 94 countries, among others.

Jiri Stejskal, CEO of the language services company CETRA, reminded us that even though MT is here to stay, only ten years ago there was still a lot of distrust and hesitation. Jiri remembers the AMTA Conference in 2008, when there was no communication yet between MT developers and translators. The mood has changed since: MT has won respect, says Jiri, and translators do PEMT, despite their initial dislike.

Can we further optimize the content lifecycle with machine learning? Yes! Alex Yanishevsky (Welocalize) presented practical applications of ML across the content lifecycle:

  • Source suitability: establishing a readability score and determining complexity (see the sketch after this list)
  • Sustainable MTPE pricing and evaluation: correlating TM and MT matches
  • Target suitability: identifying likely anomalies and errors
  • Targeted LQA: smart resourcing and MT retraining

These steps help inform decisions for optimizing current processes and resource use, and for forecasting future production activity.
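As one concrete illustration of scoring source suitability, here is a minimal sketch of a Flesch-style reading-ease estimate. The syllable counter is a crude heuristic, and the MT-suitability threshold of 50 is an invented assumption for illustration, not a Welocalize figure.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: higher scores mean simpler text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

text = "The system translates content automatically. Editors review exceptions."
score = flesch_reading_ease(text)
# Invented threshold: route simple text straight to MT, complex text to humans.
print("MT-suitable" if score > 50 else "needs human review", round(score, 1))
```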

JP Barraza showed us how the MT technology provider SYSTRAN helps companies achieve their business objectives with its specialized PNMT™. With 560 NMT models to choose from, a core engine that outperforms any other SYSTRAN engine, infinite training processes, and a one-day timeframe for specializing an engine for a specific use case, the applications are many: real-time MT for investigation workflows, e-discovery, and gaming (in-game translation with inline code).

The use cases clearly show value, but getting started with MT and ML still feels like uncharted territory for many. Panelists suggested the following considerations:

  • Do the discovery and come up with an implementation program
  • Manage expectations - don’t expect immediate results
  • Build an MT ecosystem - prepare your data and content, and involve a third party if needed
  • Measure and run experiments to determine when the output is good enough
  • Involve translators
  • Think about the potential of raw MT

Join the Conversation

Are you passionate about these topics? Join the conversations and help us raise awareness about the global impact of local content.

For all the photos from this event, check out the TAUS Facebook Page.

Author
Milica Panić

Milica is a marketing professional with over 10 years in the field. As TAUS Head of Product Marketing she manages the positioning and commercialization of TAUS data services and products, as well as the development of taus.net. Before joining TAUS in 2017, she worked in various roles at Booking.com, including localization management, project management, and content marketing. Milica holds two MAs in Dutch Language and Literature, from the University of Belgrade and Leiden University. She is passionate about continuously inventing new ways to teach languages.
