By Annik Stahl
One cannot look at the future of tech comm without seeing that we are moving into a global economy, and with that comes globalized content. Let’s look at some of the challenges and rewards inherent in writing for a global audience.
When people who are competent, adroit, and artistic at crafting words need to talk about translation effectiveness and quality, there are bound to be disagreements. Throw in some machine translation logic along with the folks who created the feature or product, and you’ve got the perfect storm. There is currently a lot of “blaming” going on among the various players in the global content development chain:
- Translation reviewers often criticize source content with no measurable evidence of actual defects.
- Translation vendors, who review machine translations, point to missing entries in the terminology database as the source of translation issues.
- Developers and program managers charge that source writers, translators, and reviewers are taking up too much valuable time and energy (because the work looks so easy from the outside).
As the need for translated and localized content grows exponentially each year, separating content development from localization is troublesome in several regards. What makes sense in American English may translate into something completely unreadable and incomprehensible to a Japanese audience. Even professional localizers sometimes have a hard time deciphering the true intent of the content, causing a lot of back-and-forth between teams, cutting into efficiencies, and causing partial or even complete rewrites of the original content.
This article addresses the challenges that translation poses to providing consistent, quality content as we continue to expand, connect to, and communicate with the world at large.
Internationalizing at the Source
In the past, “source writers” weren’t always referred to as such; they were writers and editors who wrote the parent version of a topic, article, or training module, handed it off to a localization team or perhaps a machine translation company, and then … promptly forgot about it, moving on to the next thing. The onus was on the receiving end: it was up to the translators and localizers to parse the true meaning and intent of the content. Simply writing source content that suits the source audience and handing it off for machine translation, localization, or both does not solve the problem of writing for a global audience in a scalable, reliable, and resourceful way.
While translating and localizing steps and processes are fairly straightforward objectives, it’s the nuances and ambiguous language that can change a help topic from something of quality in English to something mystifying in another language. What needs to happen is internationalization, the process of designing and writing content to facilitate localization for target audiences that vary in culture, region, or language. When you start with ideas (in the form of easily translated and localized words, phrases, and formatting) that make sense in all languages, you get consistency, accuracy, and efficiency.
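Software teams face an analogous problem with user-facing strings, and the same internationalization principle applies: deliver complete, reorderable messages rather than fragments that assume English structure. Here is a minimal Python sketch (the message strings and function names are hypothetical, invented for illustration) contrasting the two approaches:

```python
# Hard to localize: concatenated fragments bake English word order
# into the code, so translators cannot reorder the sentence.
def status_fragile(count):
    return "You have " + str(count) + " new messages"

# Easier to localize: each language gets one complete template with a
# named placeholder, so the translated sentence can put the number
# wherever that language needs it.
TEMPLATES = {
    "en": "You have {count} new messages",
    "ja": "新着メッセージが{count}件あります",  # placeholder position differs
}

def status_localized(lang, count):
    return TEMPLATES[lang].format(count=count)

print(status_localized("en", 3))  # You have 3 new messages
print(status_localized("ja", 3))
```

The same idea carries over to prose: a source sentence written as one complete, unambiguous unit gives a translator (human or machine) room to restructure it for the target language, while clever fragments and idioms lock in the source-language shape.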
While interesting language makes for engaging content, writers are encouraged to write internationalized English in order to facilitate quality translation and localization. It sets the groundwork for simpler translation and localization processes. But again, will all content feel essentially whitewashed and sterilized in the name of efficiency and scalability?
The trick is finding a balance between perfectly internationalized content and perfectly engaging content. The tools and tactics that can help are: 1) style guides (not just the Strunk and White variety) which must be developed and enforced, with buy-in from team members on both sides of the ocean; 2) a way to measure feedback from customers on how translations are working for them; and 3) machine learning. It takes a village to write for the global village.
Measuring Quality Versus Helpfulness
When it comes to measuring the impact of words, the science is not there yet. Beautiful, fluid writing may end up being just that: beautiful and fluid, but also useless and confusing. Translated words may be “correct” in isolation, yet less correct out of context with the words around them. Even perfectly translated content may not get the point across to the end user, since context changes with language and location. The ingredients are there, but no one is sure how to work the oven.
When we talk about providing translated content to a global audience, we need a way to measure the success of that translation. While vendors of machine-translated content provide us with the Linguistic Quality Assurance (LQA) score, teams need to figure out how to measure the usefulness and quality of content that post-machine translation reviewers offer. “Satisfaction ratings” that customers provide are not telling the whole story; organizations need to find a way to gather deeper insights.
Maybe the quality of some machine jobs is “good enough” for the least-viewed content. For the rest of the content (the most-viewed stuff), we use post-machine editors to localize it and to review the translation. But does that tell the whole story of “helpfulness”? And if source editors (and managers) don’t speak the languages being translated and localized, how do they assess performance and quality? Teams need to develop a mechanism to measure what’s most useful, most effective, and most germane to what the end user is trying to understand and accomplish.
Looking Forward by Working Backward
We need to consider customers first and determine what they need, instead of thinking about what we already know. The work schedule needs to start and end with our customers. Step A: What do they need? Step Z: Are they getting it? When all of us are so close to the content, it’s hard to take a step back and consider how to share our knowledge with people who aren’t necessarily like us, who don’t experience the world in the same way we do.
Great leaders—politicians, artists, teachers, writers, scientists, philosophers—make their living and spend their time thinking outside of themselves. It would behoove us to do the same as we work to improve the methods and processes we put in place to deliver quality worldwide content.
ANNIK STAHL (firstname.lastname@example.org) has worked as a writer, editor, and content strategist in the tech industry for nearly 20 years. For 10 years she was the voice and face of the Microsoft Crabby Office Lady, a humor/tech/advice column, blog, podcast, book, and series of videos targeted to Microsoft customers. After Crabby “retired,” Annik brought her skills and passion for explaining the hard stuff to Amazon, where she’s currently a content manager on the Marketplace team, supporting the company’s millions of worldwide third-party sellers.