We rarely think of linguistic quality when it is done right, because then it is not necessarily seen as a differentiator or a success factor. It is when the mark is missed that quality takes (back) the spotlight. This was one of the opening thoughts at the TAUS QE Summit, shared by James Douglas (Microsoft) at the Microsoft premises in Redmond.
With a group of around fifty experts for whom quality never leaves the spotlight, we looked at the quality standards most likely to catch on and sustain themselves in the long term, at the correlation between the metrics used to ensure quality and the customer’s perception of product quality, and, finally, we questioned whether the LSP model is fit for the future, in an attempt to predict the next big step for the language industry and its supply chain.
This report highlights some new and old quality-related challenges and potential solutions, as raised by the participants of the event. We formed multiple focus groups during the day to draft takeaways and industry-wide action points around four main topics: business intelligence, user experience, risk and expectation management, and DQF Roadmap planning.
Automation and Standards
In recent years, the industry focus has been on automation and the datafication of workflows. Without standards and common agreements, however, we can only get so far. The caveat of an automated system is that it is not aware of how its output is trending. TAUS Director Jaap van der Meer emphasized the lack of common ways of measuring quality output in the industry. One needs to be able to augment internal processes while preserving a customer-centric approach to product quality. With a tool like TAUS DQF, that capability is available in real time, including productivity results.
The short-term roadmap for DQF includes developing My DQF Toolbox – a feature that allows users to customize their reporting environment and correlate different data points.
On top of that, the benchmarking feature will be further expanded to allow internal benchmarking by vendor, customer, MT engine, content type, etc., explained Dace Dzeguze, DQF Product Manager at TAUS.
Alan K. Melby tied into the conversation about common agreements, reminding us of the importance of industry standards and the outstanding work being done by the standards body ASTM International (LTAC Global). Out of the quality metrics that have been most used so far (SAE J2450, LISA QA, MQM and DQF), he sees only two that will continue to gain adoption: TAUS DQF (harmonized with MQM) and DFKI MQM. The harmonized DQF-MQM error typology has now gained the momentum of true industry acceptance and has been included in the new ASTM standard WK46396, “New Practice for Development of Translation Quality Metrics”.
Business Intelligence: Data-informed vs Data-driven
Data and dashboards have made their way into the translation industry; now it is a matter of making sure that they are properly interpreted. Arle Lommel (CSA) led the conversation with Mark Lawyer (SDL), Scott Cahoon (Dell) and Patricia Paladini (CA Technologies) to tackle the shift to business intelligence and process monitoring. He stressed the difference between metrics and KPIs: a metric tells you something fundamental about the organization, while a KPI tells you whether you are meeting your business objectives. He also noted that even with vast amounts of data, very few organizations have effective KPIs. So, how can business intelligence help here? Where can we expect to see its benefits, when will we be able to predict quality ahead of time, and what are the best practices around data privacy?
Business intelligence (BI) will reach its full potential in the future, projecting the impact of linguistic quality on revenue, predicted Mark. Scott added that at Dell they see a lot of potential in the DQF model: today they rely heavily on humans to understand the input and output. The future is to build BI into the existing infrastructure and gain intelligence about vendors, to understand which translations or languages should go to which vendor. At CA Technologies, they are moving away from a traditional model that looks at errors to one that looks at improvements instead, emphasized Patricia. They focus their business intelligence efforts on understanding the user experience.
In continuous localization, you can get signals ahead of time, stated Mark. For SDL, real-time quality prediction requires close, real-time collaboration and metrics on both sides, client and service provider. At Dell, they approach prediction on the task level with DQF, comparing how translations move through their system against the agreed service level agreements. They believe that this unbiased methodology can advance the growth of the industry as a whole. The type of real-time data they are after at CA Technologies is understanding whether they overtranslate, and where they fail to translate when they should.
When it comes to data sharing and privacy, translators should be motivated to share their data, and the system should look at averages rather than individual good or bad days, said Patricia. Buyers need a better understanding of the service they are buying, which means access to more data from LSPs. What we are really buying from LSPs is their ability to find, manage and maintain a strong pool of translators, added Scott. It might be necessary to reevaluate the LSP model, as disintermediation is the direction the industry is moving in. Vendors should look for other opportunities and services to offer besides lining up enough translators to do a certain job, Patricia concluded.
Takeaways:
User Experience: The Ultimate Judge of Quality
In the always-on economy, it is really the user experience, more than anything else, that determines quality. In the opening conversation, introduced by Glen Poor (Microsoft), Katka Gasova (Moravia), Vincent Gadani (Microsoft) and Andy Jones (Nikon) shared their experiences with collecting and managing user-generated feedback and determining when quality meets customer expectations and requirements.
There are multiple aspects one can look at to assess whether target content will be successful: translation quality, the quality of the language attributes, the veracity of the cultural attributes and non-linguistic elements, and so on. However, these are all analytical approaches that assess compliance with the language specifications and requirements, rather than capturing the individual emotional experience. What we need, explained Katka, is a more holistic approach: target content assessment at the ‘macro level’. The same approach should also be followed when choosing the appropriate linguistic quality programs.
One of the biggest challenges is that evaluation has traditionally happened on the product level and not on the language level. The issues are mostly on the functionality side, so the questions to ask are whether there is a degradation in user experience due to translation, and whether the translation introduces any additional friction.
At Microsoft, they have measured language quality with a user survey, using a 5-point symmetrical Likert scale model and carefully crafted survey questions and translations. They collected responses from hundreds of thousands of users, which helped them calculate an NLQS (net language quality score, similar to NPS, the net promoter score) for 50 languages.
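To illustrate the idea, here is a minimal sketch of how such a net score could be derived from 5-point Likert responses. The bucketing (4-5 positive, 1-2 negative) and the NPS-style subtraction are assumptions made for illustration; the exact NLQS formula was not shared in detail.

```python
from collections import Counter

def net_language_quality_score(responses):
    """Compute a net score from 5-point Likert responses (1 = very poor, 5 = very good).

    Assumption: like NPS, the score is the share of positive responses (4-5)
    minus the share of negative responses (1-2), expressed as a percentage.
    This bucketing is illustrative, not the published NLQS definition.
    """
    if not responses:
        raise ValueError("no responses")
    counts = Counter(responses)
    total = sum(counts.values())
    positive = counts[4] + counts[5]
    negative = counts[1] + counts[2]
    return 100.0 * (positive - negative) / total

# Example: aggregate hypothetical per-language survey results
survey = {"de-DE": [5, 4, 4, 3, 2, 5], "ja-JP": [3, 3, 4, 2, 1, 4]}
for locale, answers in survey.items():
    print(locale, round(net_language_quality_score(answers), 1))
```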
Takeaways:
Risk and Expectation Management
How can industry players manage quality and pricing risks and expectations effectively in the era of MT? In this session, introduced by Scott Cahoon (Dell), Dalibor Frivaldsky (Memsource) and JP Barazza (Systran) shared their experiences with pre- and in-production assessment and evaluation tools and methods.
Scott opened the conversation by explaining the two different models they have at Dell. One is a traditional pre-production scenario, with vendors assessing and retraining engines and deciding when the NMT is ready for prime time. The second is in-production evaluation with DQF, which involves turning all languages on and running all throughput data through DQF to measure the performance of the engines and see which languages are ready now and which need to be tuned further.
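As a rough illustration of that in-production idea, the sketch below aggregates per-language evaluation scores and compares them with an SLA threshold to decide which engines count as ready and which need further tuning. The function name, score scale and threshold are hypothetical, not Dell's or TAUS's actual schema.

```python
from statistics import mean

def readiness_by_language(scores: dict[str, list[float]],
                          sla_threshold: float = 85.0) -> dict[str, str]:
    """Label each language 'ready' or 'needs_tuning' based on its average score.

    The 0-100 score scale and the 85.0 threshold are illustrative assumptions.
    """
    return {
        lang: "ready" if mean(vals) >= sla_threshold else "needs_tuning"
        for lang, vals in scores.items() if vals
    }

# Hypothetical in-production scores collected per language
production_scores = {
    "fr-FR": [88.0, 91.5, 86.0],
    "ko-KR": [72.0, 79.5, 81.0],
}
print(readiness_by_language(production_scores))
```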
Dalibor tied into that conversation by sharing interesting data from Memsource: almost 35% of translation output is of high quality, thanks to translation memories (TMs). The usefulness of a TM is limited to cases where the same or similar text is translated; Artificial Intelligence (AI) really comes into play with non-translatables. Automatically identifying segments that need no translation is the first patented AI-based feature developed by Memsource, supported in 219 language pairs. In addition, to cater for real-time assessment of MT quality, Memsource has enhanced their TMS with a machine translation quality estimation (MTQE) feature that adds a score to machine output before post-editing.
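As a hedged illustration of how such a per-segment quality-estimation score might be used downstream, the sketch below routes segments to different post-editing effort levels based on the score. The Segment class, score scale and thresholds are invented for the example; this is not the Memsource API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    source: str
    mt_output: str
    qe_score: float  # 0-100, higher means more reliable MT output (assumed scale)

def route_segment(seg: Segment, light_edit_threshold: float = 75.0,
                  no_edit_threshold: float = 90.0) -> str:
    """Pick a post-editing effort level from the QE score (thresholds are illustrative)."""
    if seg.qe_score >= no_edit_threshold:
        return "no_edit"        # treat like a high-value TM match
    if seg.qe_score >= light_edit_threshold:
        return "light_post_edit"
    return "full_post_edit"

segments = [
    Segment("Click Save.", "Klicken Sie auf Speichern.", 95.0),
    Segment("Battery life varies by usage.", "Akkulaufzeit variiert.", 68.0),
]
for seg in segments:
    print(route_segment(seg), "->", seg.mt_output)
```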
The focus at Systran is on an infinite, perpetual training model that includes quality management and measurement, explained JP. It happens as they keep feeding in new data, with automated, systematic testing. Every training iteration gets tested, while the test sets are curated so that they never end up in the training corpora. For quality evaluation, Systran uses a simple and intuitive human review that rates the adequacy of a translation against the source sentence, yielding a score on a 0-100% scale that corresponds to the portion of sentences rated better than, equal to, or worse than human translation. The perfect machine translation is one where the final translation is exactly the same as the pre-translated content.
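As a minimal sketch of that kind of review, the snippet below tallies per-sentence judgments and reports the share rated equal to or better than human translation. How Systran actually folds the three categories into a single percentage was not specified, so the aggregation here is an assumption.

```python
from collections import Counter

def adequacy_score(ratings):
    """Share of MT sentences rated 'equal' or 'better' than human translation, in percent.

    Reviewers label each sentence "better", "equal" or "worse"; reporting the
    equal-or-better share is an assumed aggregation for illustration.
    """
    counts = Counter(ratings)
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no ratings")
    return 100.0 * (counts["better"] + counts["equal"]) / total

ratings = ["equal", "better", "worse", "equal", "equal", "better"]
print(f"adequacy: {adequacy_score(ratings):.1f}%")  # adequacy: 83.3%
```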
Takeaways:
DQF Roadmap Planning and Transcreation
At the QE Summit at Microsoft in Dublin on 11 April 2018, the community assigned TAUS the task of developing a best practice for transcreation. At the QE Summit in Redmond, we consulted with the participants on a first draft of this best practice and opened the floor for other ideas on DQF features to be developed by TAUS.
Transcreation has no agreed-upon, teachable guidelines; it is mostly done independently or internally, explained Manuela Furtado (Alpha CRC), adding that measuring quality no longer relies on source and target, but on the end user and on likes, clicks, sales, and so on. The ultimate goal is to create a new text. Although it might introduce more complexity into the already complex field of global content creation and quality evaluation, transcreation represents a new niche for human ingenuity and creativity: it is part of the content evolution in the digital era. As an industry, we need to have a common understanding and agreement on what is expected of this new translation format and what it should be measured against. TAUS is working together with a board consisting of multiple companies on formulating the Transcreation Best Practices.
Takeaways:
Milica is a marketing professional with over 10 years in the field. As TAUS Head of Product Marketing she manages the positioning and commercialization of TAUS data services and products, as well as the development of taus.net. Before joining TAUS in 2017, she worked in various roles at Booking.com, including localization management, project management, and content marketing. Milica holds two MAs in Dutch Language and Literature, from the University of Belgrade and Leiden University. She is passionate about continuously inventing new ways to teach languages.