LQA: How It Differs From Proofreading and What Each Layer Catches

Proofreading happens on text inside a translation platform (or, god forbid, right inside the source file on someone's laptop). LQA happens on the localized product itself. They catch different errors, which makes them complementary rather than substitutes. If you are paying for one and not the other, you are either reading clean copy that breaks in production, or shipping in-context fixes on text that no one ever read end to end.

This post explains what each layer does, what each one misses, how to combine them, and how to size the spend. It is written for localization managers, content and loc buyers, and PMs evaluating their translation-QA stack. We assume you already manage a Crowdin, Lokalise, Phrase, or Smartcat pipeline and have heard the words LQA, LQT, and proofreading used in three different ways inside the same meeting.

LQA scoring uses a defined framework. The industry standard is MQM (Multidimensional Quality Metrics), and the output is an objective report rather than a verdict.

1. The terminology trap: LQA, LQT, proofreading, editing

Talk to three localization vendors this quarter and you will probably hear four definitions of the same thing. Before going further, here is how we will use the words in this post.

Proofreading. The final language pass on translated text inside a translation platform (Crowdin, Lokalise, Phrase, Smartcat, and so on). It catches typos, grammar, punctuation, capitalization, and stylistic issues. It is often performed without reference to the source text. The reviewer sees the target language only: clean, but blind to whether the meaning matches.

Editing. A more thorough pass that includes the source text alongside the translation. It checks accuracy, terminology, register, and overall fidelity. The line between proofreading and editing varies by vendor. Some bundle them; some bill separately; definitions drift. When in doubt, ask the vendor what their proofreading and editing service actually includes.

LQA, or Linguistic Quality Assurance. The umbrella term for ensuring the localized product meets quality standards in every language. It is performed in context, on a build, on a staging environment, or on a live product. It uses a defined severity rubric and produces a report where each issue is tagged with location, severity, and a proposed fix.

LQT, or Linguistic Quality Testing. In strict usage, LQT is the in-product testing step inside the broader LQA process. In practice, vendors and localization managers use "LQA" and "LQT" interchangeably, and most service catalogs treat them as synonyms. We will use LQA as the primary term in this post and call out LQT wherever the in-build testing aspect specifically matters.

The clean mental model:

Proofreading lives in the platform.
Editing lives in the platform with the source.
LQA / LQT lives in the product.

If you take only one thing from this post: those are different physical locations, and that is why they catch different errors.

2. Proofreading: what it actually does

A proofreading pass typically happens after the translator has delivered the target language file back into the translation platform. A second native speaker, the proofreader, opens the same file and goes segment by segment, line by line.

What they have access to:

  • The target language text (always)
  • The source text (only if the workflow defines this; many proofreading SOWs explicitly do not include source comparison, which is editing)
  • The project glossary
  • The style guide
  • Translation memory matches and concordance
  • Comments and queries from the translator

What they are looking for:

  • Typos and misspellings
  • Grammar and syntax errors
  • Punctuation, capitalization, formatting (within the segment)
  • Tone and register inconsistencies
  • Glossary deviations (when the platform highlights them)
  • Spelling of proper nouns, brand terms, product names

The pros of proofreading

Proofreading is fast and structured. Segment-by-segment review on a clean platform, with the linguistic assets right there. A proofreader can move through thousands of words a day.

It is cheap relative to what it catches. Per-word pricing, well-understood scope, easy to procure.

It leverages your language assets. Glossary, style guide, and translation memory all live in the platform and surface during the review. Proofreading is the moment those assets earn their keep.

It scales. Once a vendor knows your style guide and glossary, proofreading throughput scales with translator availability.

The cons of proofreading

Proofreading is blind to context. The reviewer sees a string in a list of strings. They do not see whether "Edit" will appear next to a pencil icon, or whether "Settings" will be a screen title or a button. That changes everything about the right translation.

It is blind to length. The platform does not show that the German for "Save changes" is 35 percent longer than the English, and that the button it goes into is fixed-width.

It is blind to layout, format, and locale. Date formats, currency placement, decimal separators, RTL flow, hyphenation rules: none of that is visible in a segment view. A proofreader can read perfect Arabic copy and miss that the entire screen flows the wrong way.

It is blind to dependencies. If string ID 4521 says "Welcome," and string ID 4522 (which appears beneath it on the welcome screen) introduces the user to a feature, the proofreader has no way to know the two strings need to flow as a paragraph in the target language. Concatenation issues are invisible.

Proofreading polishes the words. It does not see the screen.

For long-form static content like blog posts, knowledge-base articles, email templates, and marketing copy, proofreading is often enough. The unit of meaning is the paragraph, not the screen, and the platform view is a faithful representation of how the reader will encounter the text.

For anything that lives inside a product UI, it is not.

Need proofreading on long-form copy or LQA on a build?

Get a Quote

3. Editing: the in-between

Editing is usually defined as a with-source review pass. The reviewer reads the source segment, reads the translation, and assesses meaning, accuracy, and faithful register.

In a properly tiered workflow this looks like translator → editor → proofreader. In practice, especially with smaller volumes or budget pressure, the steps collapse. Editing and proofreading get bundled into a single second-language pass that some vendors call "review," others call "TEP" (Translation, Editing, Proofreading), and others just call "translation with QA."

The takeaway for buyers: the word "proofreading" alone may or may not include source comparison; ask before signing the SOW. Either way, neither editing nor proofreading sees the localized product. That is the gap LQA fills.

4. LQA: what it actually does

An LQA pass happens after the translated content has been built into the product: a mobile app build (.apk, TestFlight), a deployed staging environment, a published webpage, a game build, a video with subtitles burned in. The reviewer's environment is no longer the translation platform; it is the product itself, in the target language, on a target-region device or browser.

A native-speaker linguist runs through key user flows and screens, flagging anything that is wrong, awkward, or unsafe. They use the same translation platform as a fix-routing tool, but the issues are found against the live product.

What an LQA reviewer has access to:

  • The localized product, fully built and runnable
  • The original translated strings (via the platform)
  • The user flows and screens where each string actually appears
  • Locale-specific device settings, currencies, date formats
  • The full visual context: layout, fonts, icons, surrounding copy
  • Where applicable: test scenarios, defined critical paths, severity criteria

What they are looking for is a longer list than proofreading. We will spend the next section on it because that is where the value lives.

The pros of LQA

LQA catches what proofreading cannot.

It is objective and scorable. Findings are tagged with severity (Critical / Major / Minor / Preferential) using a defined rubric, so quality becomes a number you can track over time and across languages.

It plugs in. A third-party vendor can perform LQA on top of any existing translation pipeline, without re-onboarding translators or rebuilding glossary infrastructure. The vendor needs build access and the source files.

It surfaces upstream issues. A pattern of similar errors across multiple locales reveals problems in the source code (hard-coded strings, layout assumptions, missing internationalization) that no amount of translator-side work can fix.

The cons of LQA

LQA needs a build. Mobile teams need to provide an APK or a TestFlight build; web teams need a staging environment with locale switchers wired up; game teams need either a build or, ideally, a developer console with cheats to navigate quickly. If your build process is not ready, LQA cannot start.

It is slower per word. A linguist navigates through screens and flows, not through a list of strings. A typical mobile-app LQA pass might cover thousands of strings, but the reviewer hits each one in context, not at platform throughput.

It needs scenarios. To run efficiently, an LQA team needs at least a rough idea of which screens, flows, and strings matter most. Otherwise reviewers spend time on settings menus instead of checkout.

These costs are real. They buy a class of finding that platform-level review cannot produce, no matter how good the proofreader.

5. The errors LQA catches that proofreading cannot

Each of the issues below was invisible at the translation-platform level. Each one only surfaced when a native-speaker linguist saw the localized product on a target device.

Right-to-left orientation (Arabic, Hebrew, Farsi, Urdu)

A perfectly translated Arabic paragraph means nothing if the navigation flows left to right, the chat-bubble icons point the wrong direction, and the back button still sits on the left. RTL is an internationalization concern that lives in code, not in translation memory. The proofreader sees clean Arabic in the platform; the LQA reviewer sees the screen flowing wrong. Mirroring affects layout direction, icon orientation, animation direction, scrollbar placement, and even the order of paginated lists. None of that is visible to a segment-level review.

Date, time, and currency formatting

In French, the currency symbol comes after the amount: "0,19 USD/min" rather than "USD 0.19/min." In Japanese, dates mark the month with the character 月 after the number, so a bare "6" reads as a number, not as "June." In German, the decimal separator is a comma; in English, a period. None of this can be checked at the segment level. The translator is given a string template, and locale-aware formatting is the runtime's job. But the runtime is often configured wrong, or the translator gets a literal "0.19 USD" string and cannot fix the locale logic from inside the platform.

iOS lock-screen widget showing a Japanese date with month character 月 highlighted
A Japanese date that needs the 月 character after the day number. Invisible in a translation platform, obvious in the build.
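
The failure mode is easy to see in code. Here is a minimal sketch with a hand-rolled convention table; the table and function names are illustrative only, and a real product should get these rules from the platform's locale APIs or CLDR data, never from a table like this:

```python
# Illustrative sketch: why one hard-coded money template breaks per locale.
# The convention table is an assumption, not real CLDR data.
CONVENTIONS = {
    "en-US": {"decimal": ".", "pattern": "USD {amount}/min"},  # symbol first
    "fr-FR": {"decimal": ",", "pattern": "{amount} USD/min"},  # symbol after
    "de-DE": {"decimal": ",", "pattern": "{amount} USD/min"},  # comma decimal
}

def format_rate(value: float, locale: str) -> str:
    conv = CONVENTIONS[locale]
    amount = f"{value:.2f}".replace(".", conv["decimal"])
    return conv["pattern"].format(amount=amount)

print(format_rate(0.19, "en-US"))  # USD 0.19/min
print(format_rate(0.19, "fr-FR"))  # 0,19 USD/min
```

The point of the sketch is the shape of the bug: if "USD 0.19/min" ships as one opaque string, no amount of platform-side review can apply the French convention.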

Text expansion and overflow

Spanish, French, and German translations run 25 to 30 percent longer than English. A button labeled "Save" in English becomes "Sauvegarder" in French, "Speichern" in German, and "Guardar" in Spanish. If the button is fixed-width or nested inside a tight grid, the translation gets truncated, ellipsized, or wraps in ways that destroy the UX. The proofreader sees the correct word; the LQA reviewer sees it cut off.

French mobile app navigation bar with the word 'Recherher' truncated in a fixed-width tab
A French translation runs ~25 to 30% longer than English. Fixed-width navigation tabs do not forgive that.
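
A cheap pre-LQA guard for this class of bug is a length check against a width budget. A sketch, with an assumed character budget (character counts only approximate rendered pixel width, so this flags candidates; the build remains the ground truth):

```python
# Hypothetical pre-LQA length check; the 8-character budget is an assumption.
BUTTON_WIDTH_CHARS = 8  # pretend fixed-width button budget

translations = {
    "en": "Save",
    "fr": "Sauvegarder",
    "de": "Speichern",
    "es": "Guardar",
}

# Flag any locale whose label exceeds the budget and will likely truncate.
overflows = {loc: text for loc, text in translations.items()
             if len(text) > BUTTON_WIDTH_CHARS}
print(overflows)  # {'fr': 'Sauvegarder', 'de': 'Speichern'}
```

Checks like this catch the obvious truncations early, but proportional fonts, wrapping rules, and dynamic layouts mean only an in-build pass confirms what the user actually sees.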

Hard-coded layout assumptions

Code that adds spaces, line breaks, or punctuation in the host language frequently breaks in translation. French requires a non-breaking space before a colon: "Erreur :" rather than "Erreur:". If the colon and its spacing are hard-coded in the source code rather than left to the translatable string, every French string gets it wrong: a punctuation error obvious to a French reader and invisible to an English-speaking developer.

French mobile app screen showing 'Activation immédiate :' with a hard-coded space before the colon
French requires a non-breaking space before a colon. Hard-coded source spacing breaks every locale where the rule differs.
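
In code, the fix is to keep punctuation inside the translatable string, or to apply it per locale. A simplified sketch (French typography varies the space by punctuation mark; this shows only the colon case, and the function name is illustrative):

```python
NBSP = "\u00a0"  # non-breaking space

def with_colon(label: str, locale: str) -> str:
    # French puts a non-breaking space before the colon; English does not.
    if locale.startswith("fr"):
        return label + NBSP + ":"
    return label + ":"

print(with_colon("Erreur", "fr-FR"))  # Erreur : (with NBSP)
print(with_colon("Error", "en-US"))   # Error:
```

If instead the code does `label + ":"` unconditionally, the French rule is unreachable from inside the translation platform, which is exactly why only an in-build review catches it.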

Capitalization and string concatenation

Many products break long sentences into multiple strings ("You have ", count, " new messages") to allow runtime substitution. In languages with grammatical gender, declension, or different word order, this falls apart. A literal Russian translation of "You have count new messages" reads as broken Russian because the verb form depends on the count. Proofreaders see strings; LQA reviewers see the full sentence assembled.

German fertility-tracking app screen with the highlighted string '[LH-Anstieg]' showing a string-concatenation issue
When sentences are split across multiple strings for runtime substitution, capitalization and word order break in target languages.
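
The Russian case is concrete enough to sketch. The plural forms below are real Russian; the selector is a simplified version of the standard CLDR plural rule for Russian, and it is exactly the logic that a concatenated English-order template cannot express:

```python
# Simplified CLDR plural rule for Russian: one / few / many.
def ru_plural(n: int) -> str:
    forms = ("новое сообщение", "новых сообщения", "новых сообщений")
    if n % 10 == 1 and n % 100 != 11:
        return forms[0]  # 1, 21, 101... "one" form
    if 2 <= n % 10 <= 4 and not 12 <= n % 100 <= 14:
        return forms[1]  # 2-4, 22-24... "few" form
    return forms[2]      # 5-20, 25-30... "many" form

for n in (1, 2, 5, 21):
    print(f"У вас {n} {ru_plural(n)}")  # "You have N new messages"
```

A pipeline that glues "You have " + count + " new messages" together in code leaves the translator no hook for this selection, so the Russian always reads broken for some counts.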

Context-dependent gender and verb form

The English word "Finished" maps to "Terminé" (masculine) or "Terminée" (feminine) in French depending on what is finished. Without seeing the surrounding screen, the translator and proofreader have to guess. Inside the build, the LQA reviewer sees that "Finished" labels a thumbnail of a photo (image, feminine in French) and corrects the form. A proofreading pass would never have caught this.

French photo-gallery app showing the label 'Terminé' that should be 'Terminée' to agree with the feminine noun for image
The English word "Finished" needs gender agreement in French. Without seeing the screen, no translator and no proofreader can know it is labelling an image.

Verb moods and tenses behave the same way. A Chinese reviewer can only flag that an instruction was localized as a question rather than an imperative once they see the full screen.

Third-party module localization

Third-party UI components like chat widgets, payment forms, and analytics consent banners often render their own translations from their own translation files. They are frequently overlooked when localization scopes are defined, so they ship with English copy in an otherwise localized product. Only an LQA pass on the running app reveals it.

French sign-up form with an embedded English chat widget displaying 'Hi. Need any help?'
A perfectly localized French sign-up form, sabotaged by an English chat widget. Third-party modules render their own translations and are routinely missed in localization scope.

Cultural appropriateness in context

A color, an image, a name, an idiom, a date that means something charged in the target market: these surface only when the product is seen as a whole. A proofreader reading the welcome screen copy cannot tell that the welcome screen image is a culturally inappropriate gesture in Japan. (For deeper cultural-fit work specifically, see our cultural analysis service, which complements LQA on launch-critical content.)

A proofreading pass cannot find any of the above. An LQA pass routinely finds all of them. That is why LQA is a distinct service.

Each of these findings goes into a structured report tagged with an MQM error category and a severity level. That is what turns LQA from opinion into a number you can track.

6. Scoring it: MQM and the severity rubric

LQA without a scoring framework is just opinion. The industry standard is MQM (Multidimensional Quality Metrics), a structured way to categorize translation errors so quality becomes a number you can track over time and across languages. Each finding gets two tags: an error category (terminology, accuracy, fluency, style, locale convention, design, internationalization, and so on) and a severity level. Together they produce a per-locale score that holds up under scrutiny.

The severity column does the most work in a triage meeting. Five buckets cover the cases that show up in practice:

  • Critical. User-facing errors with legal, safety, or branding implications, or anything that crashes or breaks the product.
  • Major. Accuracy errors that change meaning, button labels that misrepresent function, gross grammar in core UI.
  • Minor. Punctuation, formatting, or style issues that do not lose meaning.
  • Preferential. A different translation that is also correct. Logged for transparency, not counted against quality.
  • Blocker. Something that prevents further testing. For example, the app crashing when a locale is set, blocking review of subsequent screens.
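
One common way these buckets roll up into a per-locale number is severity-weighted errors per 100 words. A minimal sketch with hypothetical weights (real programs calibrate their own; Blocker findings halt testing rather than being scored, and Preferential findings weigh zero because they are logged, not counted):

```python
# Hypothetical severity weights; real MQM programs calibrate their own.
WEIGHTS = {"critical": 10, "major": 5, "minor": 1, "preferential": 0}

def mqm_score(findings: dict[str, int], word_count: int) -> float:
    """Per-locale score: 100 minus weighted errors per 100 words reviewed."""
    penalty = sum(WEIGHTS[sev] * n for sev, n in findings.items())
    return round(100 - penalty * 100 / word_count, 1)

# e.g. a 5,000-word locale with 1 critical, 4 major, and 12 minor findings:
print(mqm_score({"critical": 1, "major": 4, "minor": 12}, 5000))  # 99.2
```

The exact formula matters less than the property it buys you: two releases, or two locales, become directly comparable numbers instead of two stacks of screenshots.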

A defensible LQA process produces numbers, not narratives. If your vendor delivers an LQA report that reads as a list of opinions without severity tags or category mapping, you do not have an LQA process. You have an expensive proofreading pass with extra steps.

If you want to see what an MQM-tagged report actually looks like before commissioning one, Alconost runs a free MQM annotation tool at alconost.mt: no registration, 16 MQM error categories, three severity levels, 120+ languages. Run a sample through it and the format speaks for itself.

7. What an actual LQA report looks like

A production LQA report is a structured deliverable, not an email with screenshots. The shape that works:

Each finding carries these fields:

  • Location. Where the issue was found: a link, a screen reference, or a screenshot
  • Severity. Critical / Major / Minor / Preferential / Blocker
  • Issue description. One- or two-sentence explanation of the problem, plus a screenshot
  • Existing translation. The current target-language string
  • Proposed translation. Recommended fix
  • Comment. Reviewer's reasoning, additional context, or queries
  • Fixed in [platform]. Boolean: has the fix been applied to source?

Each row is one finding. A typical mobile-app LQA pass produces between 40 and 200 findings per language; web apps and enterprise SaaS often produce more. The report goes to the localization manager, who routes fixes to translators (in the platform) or to engineers (for code-level issues like layout, RTL handling, or third-party scope).
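
The row format maps naturally onto a structured record, which is what makes findings machine-trackable across releases. A sketch with illustrative field names (the schema and the example finding are assumptions, not a fixed standard):

```python
from dataclasses import dataclass

# Illustrative schema mirroring the report fields; names are assumptions.
@dataclass
class Finding:
    location: str                # link, screen reference, or screenshot
    severity: str                # Critical / Major / Minor / Preferential / Blocker
    description: str             # one- or two-sentence explanation
    existing: str                # current target-language string
    proposed: str                # recommended fix
    comment: str = ""            # reviewer's reasoning or queries
    fixed_in_platform: bool = False

finding = Finding(
    location="checkout/payment-method-screen",
    severity="Major",
    description="German button label truncated in a fixed-width container.",
    existing="Speicher",
    proposed="Speichern",
)
print(finding.severity, finding.fixed_in_platform)  # Major False
```

Keeping the report in a structure like this, rather than as prose in an email, is what makes the trend analysis in the next paragraph possible.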

Over time, the severity tags reveal patterns: one engine consistently fails on terminology in Japanese, one translator's German has fluency issues, one screen accumulates findings every release because it has hard-coded English strings. That diagnostic value is hard to replicate any other way.

The dimension-and-severity tags align with MQM, so reports are interoperable with any MQM-based quality program your team already runs.

8. Outsourcing dynamics: LQA plugs in, proofreading needs onboarding

One reason LQA is so widely outsourced, even by teams with in-house translators, is that it is vendor-agnostic. Three implications follow.

LQA can sit on top of any existing pipeline. Whether your translation is done in-house, by a single agency, by a roster of freelancers, or via AI with light machine translation post-editing, LQA runs on the output (the build), not on the workflow upstream. You do not need to re-onboard your translators, expose your TMs, or change your platform.

Onboarding is light. An LQA vendor needs build access (APK / TestFlight / staging URL), a list of locales in scope, and ideally a rough scenario list (key screens, conversion flows, regulated content). They can be productive within days rather than weeks.

Proofreading is the opposite. Effective proofreading needs the vendor to know your style guide, your glossary, your terminology, and your brand voice. That knowledge takes time to build and is sticky. Switching proofreading vendors mid-project is painful. LQA findings are objective enough that switching LQA vendors is mostly an exercise in standardizing the report format.

This is why most mature programs run proofreading with a small, deeply trained set of preferred linguists, and run LQA with a separate vendor (or as periodic third-party audits). The two pull different levers and have different switching costs.

9. The hybrid pattern: when to use what

The wrong question is "LQA or proofreading?" The right question is "what mix?"

A pragmatic pattern:

  • Long-form static content (blog, knowledge base, marketing emails, legal copy that does not render in a UI): proofreading is the primary review layer. The unit of meaning is the paragraph, the platform view is faithful, and LQA adds little.
  • Business-critical UI flows (sign-up, checkout, payment, onboarding, cancellation, data deletion, regulated disclosures): LQA is mandatory. These are the flows where errors translate directly to lost conversions, support tickets, or regulatory exposure. Proofreading alone is not enough.
  • Standard UI (settings, profile, peripheral flows): LQA on a sample, typically 20 to 40 percent of strings, prioritized by traffic and recency of change. Proofreading covers the rest at the platform level.
  • Marketing landing pages with embedded UI: both. Proofreading on the long-form copy, LQA on the in-page CTAs and forms.
  • Highly dynamic UI (string concatenation, gendered forms, RTL languages, currency-heavy screens): LQA always; proofreading optional.
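
For the "LQA on a sample" tier, the prioritization can be as simple as ranking screens by traffic and recency of change and taking the top slice. A sketch with made-up screens and numbers (the priority formula is an assumption; any monotonic blend of traffic and recency works):

```python
# Hypothetical screen inventory: (name, monthly views, days since last change).
screens = [
    ("checkout", 9_000, 2),
    ("settings", 1_200, 90),
    ("onboarding", 6_500, 7),
    ("profile", 800, 180),
]

def priority(views: int, days_since_change: int) -> float:
    # High-traffic, recently changed screens float to the top.
    return views / (1 + days_since_change)

ranked = sorted(screens, key=lambda s: priority(s[1], s[2]), reverse=True)
sample = ranked[: max(1, round(len(ranked) * 0.3))]  # ~30% sample
print([name for name, *_ in sample])
```

The sampling rate is a budget knob; the ranking is what keeps reviewers on checkout instead of the settings menu.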

A note on cost. Proofreading is usually priced per word: predictable and linear with content volume. LQA is usually priced by the hour: how long it takes a linguist to walk through your scenarios, reproduce the issues, log them with screenshots, and write up the report. The hourly rate is comparable to proofreading; the budget is driven by scenario complexity, reporting workload, and how reproducible the issues are. A direct per-word comparison between the two is misleading, and the right comparison is per error caught.

One thing worth stressing. A typical localization project is a mix of legal copy, marketing, UI, landing pages, documentation, and so on. Adding LQA does not mean adding it to all of that. If you scope LQA to the UI, the conversion-critical landing pages, and the checkout flow (the slices where build-level errors actually hurt), the LQA share of the total localization budget stays small. Most of the words on most projects do not need LQA, which is why a sensible LQA program is usually a smaller line item than buyers expect.

LQA vs. proofreading summary card: proofreading lives in the platform and catches typos, grammar, and style; LQA lives in the product and catches RTL and layout overflow, locale formats and gender, and third-party modules
The split in one frame: proofreading lives in the platform; LQA lives in the product.

Want LQA on your next release without rebuilding your pipeline?

Get a Quote

10. What about AI translation?

If your translation upstream is AI-driven (raw MT or LLM-based), the layering question gets sharper, not simpler. AI output reads more fluently than its accuracy warrants. It looks polished while still carrying terminology drift, locale-convention errors, register problems, and outright mistranslations that platform-level reading does not reliably catch.

This is also why LQA is not the right first layer on top of raw AI. LQA is a precision instrument for the issues that only show up in a build (RTL flow, layout overflow, gender, concatenation, third-party widgets). If raw AI output goes straight to LQA, the report fills up with translation issues that should have been fixed upstream, and the LQA budget is spent reporting them instead of catching what only LQA can find.

The right layering is:

  • Long-form static content (knowledge base, help articles, marketing copy): AI plus post-editing, where a human reviewer works in the translation platform with the source text and fixes the systemic AI issues. No LQA needed for content that does not render in a UI.
  • UI and business-critical flows (sign-up, checkout, onboarding, regulated copy, in-product strings): AI plus post-editing plus LQA. Post-editing first, in the platform, to clean up the translation. LQA second, on the build, for the build-level issues post-editing structurally cannot see.

This is why we do not position raw AI as a complete localization solution. Alconost is a human translation company specializing in MTPE (AI-assisted translation with native-speaker review), and we run LQA as a separate, build-level layer on top of an already-edited translation, not on top of raw AI output.

We will dig into the layered review patterns for AI translation, including when post-editing is enough, when LQA is mandatory, and how to add LQA to an existing AI pipeline without rebuilding it, in the next post in this cluster: AI Translated Your Product. Now What? The Right Review Layer for AI Output (coming soon).

11. How Alconost runs LQA

Our LQA service, internally called LQT when scoped to in-product testing, runs in four stages. We use the same pattern for mobile apps, games, web apps, and SaaS platforms. The deliverables differ but the structure is consistent.

Stage 1. Build preparation. For mobile, we install the .apk file or TestFlight build on our testing devices and verify build stability and access to all features. For web and SaaS, we get staging access and a locale switcher we can drive. For games, we get a build with cheats or a developer console for fast navigation through key flows.

Stage 2. Test scenarios (optional but recommended). We review or co-author a list of test scenarios covering key user flows, gameplay elements, conversion paths, and UI interactions. For products with monetization, microtransactions, or regulated content, scenarios pay for themselves several times over by ensuring reviewers spend time where errors hurt most.

Stage 3. Translator-tester setup. We assemble a team of native-speaker linguists with experience in the relevant domain (game genre, SaaS vertical, regulatory environment) and brief them on the product, scenarios, severity criteria, and reporting format.

Stage 4. Testing and reporting. Linguists run through the product in their target language, log issues with location, MQM error category, severity, current text, proposed fix, and screenshot. The full report goes to the localization manager. Optional add-ons:

  • Translation adjustments applied directly to the source files (when we have access to the platform)
  • Direct reporting in the client's bug tracker (Jira, Linear, GitHub Issues)

The resulting report is the objective, scorable artifact your team can track over time, brief future translators with, and use to triage upstream code-level issues.

If you are running localization without LQA today, the first audit is usually the one that pays for the next two years of the program, both in errors caught and in upstream patterns surfaced. We are happy to scope a first-pass audit on whatever build you have running today: request a quote.

FAQ

Is LQA the same thing as LQT?
In strict usage, LQT (Linguistic Quality Testing) is the in-product testing step inside the broader LQA process. In practice, vendors and localization managers use the two terms interchangeably, and most service catalogs treat them as synonyms. Whichever label your vendor uses, what matters is whether the review happens against the actual build, not whether the platform calls it "LQA," "LQT," or "linguistic testing."

If we already pay for proofreading, do we also need LQA?
For long-form static content (blog posts, knowledge-base articles, marketing emails), proofreading is usually enough. For anything that lives inside a UI (sign-up flows, checkout, settings, in-app onboarding), proofreading alone is not enough, because it cannot see the screen. Most mature programs run proofreading and editing on long-form copy and LQA on the product itself.

How much does LQA cost compared to proofreading?
Proofreading is usually priced per word; LQA is usually priced by the hour, since the time per locale depends on scenario complexity, reporting workload, and how reproducible the issues are. The hourly rate is broadly comparable to proofreading, so a direct per-word comparison is misleading. The honest comparison is per error caught: LQA finds a class of issue that proofreading structurally cannot (RTL flow, layout overflow, locale formats, gendered concatenation, third-party widget gaps), and one shipped error in a conversion-critical screen often costs more than an entire LQA pass.

Will adding LQA blow up my localization budget?
Usually not. A typical localization project is a mix of legal copy, marketing, UI, landing pages, documentation, and other content, and LQA only goes on the slices where build-level errors actually hurt: UI, conversion-critical landing pages, checkout, and regulated flows. Most of the words on most projects do not need LQA, so the LQA share of the total localization budget stays small. Scope it to where it pays back, not across everything.

Does LQA replace proofreading?
No. They catch fundamentally different errors and run on different artefacts: proofreading on the strings inside your translation platform, LQA on the running product. The right question is not "LQA or proofreading?" but "what mix?" Long-form static content gets proofreading; business-critical UI flows get LQA; standard UI gets a sample.

What does an LQA report actually contain?
A production LQA report is a structured deliverable, not an email with screenshots. Each finding lists location (link or screenshot), severity (Critical / Major / Minor / Preferential / Blocker), the existing translation, the proposed fix, a reviewer comment, and a status field for whether the fix has been applied in the translation platform. A typical mobile-app pass produces 40 to 200 findings per language.

Can LQA be added to an existing AI translation pipeline?
Yes, and LQA plugs in without changing the upstream workflow. You do not need to switch translators, expose your TM, or change your platform. The LQA team needs build access, the locale list, and ideally a rough scenario list. One important caveat: LQA is the wrong first layer on raw AI output. Run a post-editing pass in the translation platform first to clean up terminology drift, register, and accuracy, then run LQA on the build for the issues only the build can reveal (RTL flow, layout overflow, gender, concatenation, third-party widgets). LQA on top of raw AI fills the report with translation issues that should have been fixed upstream and spends the LQA budget reporting them instead of catching what only LQA can find.
About the Author
Ilya Spiridonov
Chief Commercial Officer, Alconost

Ilya has spent 10+ years helping companies scale globally through localization. As CCO at Alconost, he works directly with enterprise and SaaS clients on localization strategy, vendor selection, and ROI optimization.



Wildlife Studios
Games

Wildlife Studios

75 000+ localized into FR, DE, IT, KO, RU, TR, PT-BR, ES-MX, UK, RO, AR

App in the Air
Mobile Apps

App in the Air

500,000+ words localized into PT-BR, PT, NL, KO, HI, FR, ES, SV, IT, TR, JA, AR, DE, ZH-CN, ZH-TW

Apptweak
Mobile Apps

Apptweak

100,000 words localized into JA, KO, ZH, FR for ASO analytics platform

Discourse
E-Learning

Discourse

55,000 words localized into ZH-CN, PT-BR, IT, FR, DE, AR, FI, JA, ES for open-source forum platform

Gcore
Software

Gcore

100 000+ words localized into ZH-CN, DE, ES, PT-BR

Grand Hotel Mania
Games

Grand Hotel Mania

100,000+ words localized from RU into 20 languages for hotel simulator game by Deuscraft

IllFonic
Games

IllFonic

IllFonic Inc.

InterSystems
E-Learning

InterSystems

550+ words localized into ES, FR, PT-BR, ZH-CN, JA

Kissflow
E-Learning

Kissflow

140,000+ words localized into IT, TH for low-code/no-code work platform

Klondike
Games

Klondike

50,000 words localized into DE, ES, IT, FR, PL, NL, JA, KO, ZH-CN, ZH-TW, PT-BR for VIZOR APPS

Clue
Software

Clue

Localization of Clue mobile app

Read case study
Dacadoo
Mobile Apps

Dacadoo

100,000+ words localized into 17 languages for digital health platform

My Cafe
Games

My Cafe

400 000 words and counting localized into FR, ES, PT-BR, KO and 6 more

Party Hard
Games

Party Hard

Localization of the Party Hard game

Planner 5D
Mobile Apps

Planner 5D

20,000 words localized into 24 languages for home design app

Punch Club
Games

Punch Club

20 000 words localized into ZH-CN, PL

Read case study
RICOH360 Tours
Software

RICOH360 Tours

18 000 characters localized into Japanese –> English, German, French, Spanish, Dutch

Sumsub
Software

Sumsub

7,000 words localized into 28 languages for identity verification platform

Transporeon
Software

Transporeon

50,000 words localized into 18 languages for logistics visibility platform

Aktiia
Websites

Aktiia

21,000 words localized into FR, DE, IT for blood pressure monitoring startup

Awarefy
Mobile Apps

Awarefy

30 000 characters localized into Japanese –> English

Baby Tracker
Mobile Apps

Baby Tracker

5 000 words localized into ES-LA, PT-BR, DE, UK

Circuit
Mobile Apps

Circuit

5,000 words localized into 30+ languages for delivery route planning app

CSAT
Software

CSAT

200 000+ words localized into AR, HE, IT, KO, PL, PT-BR, PT, TR, ZH-CN

Driivz
Software

Driivz

1 300 words localized into HR, CS, ET, FI, FR, FR-CA, DE, EL, HU, IS, IT, LV, LT, NO, PL, RO, SK, SL, ES-ES, SV

Foodback
E-Learning

Foodback

50,000 words localized into 12 languages for restaurant feedback platform

Gentler Streak
Mobile Apps

Gentler Streak

2 000 words per month localized into FR, DE, IT, ZH, ZH-HK, JA, KO

Harvest Land / Paris: City Adventure
Games

Harvest Land / Paris: City Adventure

200,000+ words localized from RU into 8 languages for Mysterytag games

Harvest Land
Games

Harvest Land

2 000 words per month localized into RU → EN, ES, PT-PT, FR, IT, DE, KO, JA, ZH

Hotel Life
Games

Hotel Life

12,000 words localized into 10 languages for hotel simulation game by Eidolon

HUB Parking
Software

HUB Parking

62,000 words localized into RU for smart parking solutions

Keenetic
Websites

Keenetic

30,000 words localized into PL, ES, FR, DE, SV, PT, IT for Wi-Fi router manufacturer

Charm Farm
Games

Charm Farm

Localization of Charm Farm Game

Read case study
Zombie Castaways
Games

Zombie Castaways

Localization of the Zombie Castaways game

Meisterplan
Software

Meisterplan

74,500 words localized into ES, FR, DE for project portfolio management

Onde
Mobile Apps

Onde

up to 1 000 words per month localized into SV, RW, DA, SQ, PL, KM, ET, MY, ZH-HANS, FI, DE, LV, HE, NL, HR, SK, NO, LT, IT, TH, SO, ID, IS, UR-PK, ZH-HANT, CS, UK and 10 more

OpenProject
Software

OpenProject

1 000+ words per month, up to 150 000 localized into FR, ZH-CN, ES-ES, IT, PL, PT-PT, PT-BR, KO, UK

Pillow
Software

Pillow

100,000+ words localized into 13 languages for sleep tracking app by Neybox

Playwing
Games

Playwing

40 000+ words localized into AF, AR, BN, MY, HR, CS, NL, ET, FR, KA, DE, EL, HU, ID, MS, PL, PT, RU, SK, ES, SV, TH

Clash of Kings
Games

Clash of Kings

Proofreading of in-game text for Clash of Kings

Read case study
Soundiiz
Software

Soundiiz

15,000+ words localized into 14 languages for music playlist transfer app

Speakap
Mobile Apps

Speakap

5,000 words localized into DE, NL, ES for employee communication app

Stripo
Websites

Stripo

25 000 words localized into PT-BR, TR, CS, FR, DE, IT, ES, PL, ZH-TW, NL, SL

Sufio
Mobile Apps

Sufio

3 000 words localized into FR, DE

Tonsser
Mobile Apps

Tonsser

40,000 words localized into ES-US, PT, SV, DE for football community app

Vizor
Games

Vizor

into ES-ES, NL, PL, ZH-CN, ZH-TW, PT-BR, IT, KO, FR, DE

Read case study
Alvadi
E-Commerce

Alvadi

Multilingual SEO for automotive supplier expanding to 30+ markets

BoxHero
Software

BoxHero

10000 localized into ES-419, ZH-CN, ZH-TW

Epic Roller Coasters
Games

Epic Roller Coasters

4,000 words localized into ZH-CN, FR, DE, JA, KO, RU, ES for VR game by B4T Games

Dating Apps Bundle
Mobile Apps

Dating Apps Bundle

50,000 words localized into 36+ languages for Red Panda Labs dating apps

Face Yoga
Mobile Apps

Face Yoga

2,000 words localized into ES-419, PT-BR for skincare app by Tepluhab

Forest Bounty
Games

Forest Bounty

10,000 words localized from RU/EN into ES, FR, PL, PT-BR for VigrGames

HUD App
Software

HUD App

10,000 words localized into 18 languages for dating app

DreamCommerce
E-Commerce

DreamCommerce

Localization of DreamCommerce Platform

Read case study
Jooble
E-Learning

Jooble

10 000 words localized into ES, PT, KO, JA and 11 more

Read case study
Smarty CRM
Software

Smarty CRM

Localization of Smarty CRM platform

Read case study
Targetprocess
Software

Targetprocess

Localization of Targetprocess platform

Mahjong Treasure Quest
Games

Mahjong Treasure Quest

30 000 words localized into EN → JA, MTPE: EN → PL, NL, KO, ZH-CN, ZH-TW, DE, FR

Primagest
Websites

Primagest

80 000 characters localized into JA → EN, ZH

Raymy
Software

Raymy

80 000 characters localized into Japanese –> English, Chinese (traditional),Vietnamese, Hindi

Sana Commerce
E-Commerce

Sana Commerce

Bi-weekly B2B e-commerce platform updates in 22 languages

Swappy Dog
Games

Swappy Dog

25,000 words localized from RU into 19 languages for match-3 game by Funmatica

Swoo
Mobile Apps

Swoo

30,000 words localized into ES, IT, PT for digital wallet app by CARDS/MOBILE

EnjoyGaming
Games

EnjoyGaming

500 words per month localized into DE, ES, FR, HI, IT, JA, KO, PT, PT-BR, RU, SV, TR, UK

2Solar
Software

2Solar

10,500 words localized into DE for solar software platform

24 Hour Home Care
Software

24 Hour Home Care

2,590 words localized into ES-419 for healthcare staffing company

ActiveMap
Software

ActiveMap

18 000 words localized into AR

Adizes Institute
E-Learning

Adizes Institute

5,850 words localized into HE for leadership consulting platform

AI Chat Smith
E-Learning

AI Chat Smith

1 500 words per month localized into ES, JA, RU, ZH, DE, FR, PT-BR

Alice VR
Media

Alice VR

8 phrases localized into CA, EN, ES, RU

Read case study
Appewa
E-Learning

Appewa

100+ words localized into 20 languages for language learning app by Lithium Lab

Associations
Games

Associations

3 000 words localized into TR, PL, SV-SE, NO, DA, CS, SK, HU, JA, KO, and 7 more

Aviloo
Software

Aviloo

5,000 words MTPE from DE into DA, NL, FR, IT, SV, NO for EV battery diagnostics

Read case study
Berry Factory Tycoon
Games

Berry Factory Tycoon

1 500 words every two months localized into RU → EN, KO, JA

BestChange
Websites

BestChange

2 000 words per month localized into NL, PL, SV

Blink
E-Learning

Blink

32 300 words localized into FR

Bunny Boom
Games

Bunny Boom

3 000 words localized into DE, ES, FR, IT, JA, KO, PT-BR

Life is Feudal
Media

Life is Feudal

Character voiceovers for Life is Feudal: Your Own

Read case study
Cosmos VR
Media

Cosmos VR

2 000 words localized into CA, DE, EN, ES

Read case study
Darksy Cleaner
Mobile Apps

Darksy Cleaner

1,400 words localized into 9 languages for iOS photo cleaner app

Days After
Games

Days After

500 words every 1.5 months localized into RU → EN, PT-BR, ES; EN → DE, FR, KO, AR, ZH-TW, ZH-CN, NO, PL, TH, CS, JA and 10 more languages on demand

Dople
Software

Dople

11 500 characters with space localized into KO → JA

eSIM Provider
Websites

eSIM Provider

around 30 000 words when requested localized into SQ, AR, HU, IT, IS, NL, FR, DE

EXR
Games

EXR

12 000 words localized into ES, FR

GoodCrypto
Software

GoodCrypto

2 000 words per month localized into AR, ZH, FR, DE, ID, IT, KO, PT-BR, ES, TR, VI

Haiku
Games

Haiku

10 000+ words localized into ES-419, PT-BR, DE, JA, ZH-CN

Impulse
E-Learning

Impulse

Impulse - Brain Training

IQ Dungeon
Games

IQ Dungeon

IQ Dungeon - Riddle Solving RPG

Knights and Brides
Games

Knights and Brides

Knights & Brides

Lexilize
E-Learning

Lexilize

7 000 words localized into FR

Darklings
Games

Darklings

1 000 words localized into JA, ZH, ES, RU, IT, FR, DE, PT, KO

Kill Shot Bravo
Games

Kill Shot Bravo

Localization of Kill Shot Bravo

Next Stop
Games

Next Stop

7 500 words localized into FR, DE, EN, JA

EcoCity
Games

EcoCity

Localization of the EcoCity game

Forced Showdown
Games

Forced Showdown

Localization of the Forced Showdown game

Minion Masters
Games

Minion Masters

Localization of the Minion Masters game

Outpost Zero
Games

Outpost Zero

Localization of the Outpost Zero game

Streets of Rogue
Games

Streets of Rogue

Localization of the Streets of Rogue game

Tamadog
Games

Tamadog

Localization of the Tamadog game

Valentine's Day
Games

Valentine's Day

into DE, FR, IT, ES, PT-BR

Mimic Logic
Games

Mimic Logic

13 000 characters localized into JA → EN, ZH-CN

Mini Golf 100+
Games

Mini Golf 100+

10 000 characters localized into Japanese –> English, German, French, Spanish, Korean, Chinese (tw), Chinese (zh), Portuguese (Brazil)

Mini Mini Farm
Games

Mini Mini Farm

8 500 characters localized into Japanese –> English

mod.io
Games

mod.io

500 words localized into ZH-TW, ZH-CN, DE, IT, JA, KO, PL, RU, ES

MySignature
Websites

MySignature

1 500 words per month localized into IT, FR, NL, FI, PL, DE, ES, PT

Parasite Days
Games

Parasite Days

70 000 characters localized into Japanese –> English

PDIS
Software

PDIS

2 346 characters with space localized into KO → EN

PosterMyWall
E-Learning

PosterMyWall

1 000 words per month localized into ZH-HANS, DA, NL, FR, DE, ID, IT, PL, PT, RU, ES, TH

Prospre
Software

Prospre

7 000 words localized into ZH-CN, FR, DE, IT, JA, PT-BR, ES-419

Ruins Magus
Games

Ruins Magus

38 000 characters localized into Japanese –> English

Samedi Manor
Games

Samedi Manor

2,000 words localized from RU into 7 languages for idle game by Black Caviar Games

Soltec Health
E-Learning

Soltec Health

17 000 words per 6 months localized into JA

Soma Development
Software

Soma Development

8 000 words localized into AR, ZH-CN, FR, DE, ID, IT, JA, PT, RU, VI, ES-419

Sonnet of Wizard
Games

Sonnet of Wizard

224 261 characters localized into Japanese –> English

Sportplus
Websites

Sportplus

800 words localized into AR, HI

Hotel Project
Games

Hotel Project

3,622 words localized into PT-BR for merge game by Next Epic

Tovie AI
Software

Tovie AI

4,800 words localized into ES, PT-BR for conversational AI platform

Ultight
Software

Ultight

5 046 characters with spaces localized into KO → EN

Underground Waifus
Games

Underground Waifus

4 300 words localized into JA, ZH-CN, KO, FR, IT, DE

UNNI
Software

UNNI

15 000 words per month localized into TH

Vlad & Niki
Games

Vlad & Niki

15,000 words localized into 10 languages for kids claymation game by RUD present

Kerish Doctor
Media

Kerish Doctor

Voiceovers for the Kerish Doctor software

Read case study
Welcome Bot
Software

Welcome Bot

2 000 words localized into UK, LT, AR, ES, FR, DE, PT, IT, PL, HE, ID, TR, HI, VI, MS, TH, CS, NL

WRD
Media

WRD

WRD – Learn Words App Voiceover

Read case study
Azur Games
Games

Azur Games

200 – 500 words per order localized into ID, PL, IT, TR, ZH-CN, ZH-TW, KO, PT-BR, JA, FR, ES, DE, TH, HI

Conf.app
Software

Conf.app

4,500 words localized into IT, ZH-CN, PT-BR, DE, ES for event management app

Character Bank
Software

Character Bank

Localization for Character Bank software platform

Coffee Break
Software

Coffee Break

Localization for Coffee Break software platform

Google
Software

Google

Localization for Google

GROOVE
Software

GROOVE

Localization for GROOVE X

Hakali
Software

Hakali

Localization for Hakali

Request a Quote

Whether you're launching in new markets or scaling existing localization — let's make it happen.

This field is required
This field is required
Please enter a valid email address
Please enter a valid phone number
This field is required
This field is required