One million words translated overnight for under $10. That is not a future promise; it is what a founder can do today with a production AI translation pipeline. A human agency quoting the same job would send back a number north of $100,000 and a six-week timeline.
The gap between those two figures is real. So is the gap in quality for certain types of content. Understanding where each approach makes sense will save you from both overpaying for human translation you did not need and publishing AI output that damages your brand.
What are typical per-word rates for human translation today?
Professional human translation in Western markets runs $0.10–$0.30 per word, depending on the language pair and the agency. Rare language pairs (Swahili to Norwegian, for example) push toward the top of that range and sometimes past it. Common pairs like English to Spanish or English to French stay closer to $0.10–$0.15.
Those rates have barely moved in a decade. A 2024 Slator industry report put the global translation market at $56 billion, and the per-word rate at US agencies has stayed stubbornly flat because the cost structure has not changed. Senior translators still bill by the hour. Project managers still coordinate schedules. Quality reviewers still read every page.
For a 50,000-word product localization into five languages, a mid-size Western agency quotes roughly $25,000–$75,000 and a four-to-eight week timeline. That math is straightforward: 50,000 words × 5 languages × $0.10–$0.30 per word.
Niche-specialty work costs more. Legal translation runs $0.15–$0.40 per word because the translator needs domain credentials and errors carry liability. Medical content for clinical trials in the EU starts at $0.25 per word. Certified translation for immigration documents can reach $0.50 per word or be priced per page at $50–$150.
| Content Type | Western Agency Rate | Turnaround |
|---|---|---|
| General business content | $0.10–$0.15/word | 3–7 days |
| Technical documentation | $0.15–$0.20/word | 5–10 days |
| Legal/compliance documents | $0.20–$0.40/word | 7–14 days |
| Medical/clinical content | $0.25–$0.45/word | 10–21 days |
| Certified translation (per page) | $50–$150/page | 2–5 days |
These rates assume a native-speaking human translates, a second human edits, and a third reviews for terminology consistency. That three-step process, called TEP (translation, editing, proofreading) in the industry, is where most of the cost comes from, and it is also the process that catches most errors.
How does AI translation pricing work at scale?
AI translation is priced per character or per token, which maps roughly to per word. The numbers are so small they look like typos until you do the math.
Google Cloud Translation costs $0.000020 per character for standard translation and $0.000080 per character for advanced neural models. DeepL's API runs roughly $25 per million characters, or $0.025 per 1,000. OpenAI's GPT-4o, when used for translation via the API, costs roughly $0.0001–$0.0003 per word depending on the model tier and prompt length. Across the main providers, you are paying on the order of $0.00003–$0.0001 per word for production-grade AI translation.
At 50,000 words across five languages, the same job that costs $25,000–$75,000 at a human agency costs $8–$25 in raw API fees. Even if you add infrastructure, a developer to build the pipeline, and a project manager to oversee it, the total stays well under $5,000 for a one-time build and under $500 per future run.
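The comparison reduces to a single multiplication. A quick sketch of the arithmetic, using this article's illustrative rates (the function and figures are for estimation only, not vendor quotes):

```python
# Quick estimator for the human-vs-AI comparison above.
# Rates are this article's illustrative figures, not vendor quotes.

def job_cost(words: int, languages: int, per_word_rate: float) -> float:
    """Total cost to translate `words` source words into `languages` targets."""
    return words * languages * per_word_rate

source_words = 50_000
targets = 5

human_low = job_cost(source_words, targets, 0.10)     # $25,000
human_high = job_cost(source_words, targets, 0.30)    # $75,000
ai_low = job_cost(source_words, targets, 0.00003)     # about $7.50
ai_high = job_cost(source_words, targets, 0.0001)     # about $25.00

print(f"Human agency: ${human_low:,.0f}-${human_high:,.0f}")
print(f"AI API fees:  ${ai_low:,.2f}-${ai_high:,.2f}")
```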
The cost advantage is not 30% or 50%. It is three to four orders of magnitude on raw output. That is a structural difference, not a discount.
Where costs do accumulate on the AI side:
- Building the translation pipeline: a developer needs to write code that calls the API, handles errors, manages file formats, and routes content to the right model. A basic pipeline takes two to four days of engineering work. A production-grade one with formatting preservation and terminology control takes two to three weeks.
- Human review on top: most teams that use AI translation at scale still run a human spot-check on 5–10% of output. At $0.10–$0.15 per word on 10% of the 50,000 source words, that adds roughly $500–$750 to the job above.
- Glossary and style guide setup: AI models can be instructed to follow your brand terminology, but someone has to write the instructions. A 200-term glossary for a SaaS product takes a bilingual domain expert four to eight hours.
Even with all three of those costs factored in, the AI approach rarely exceeds 10–15% of what a human agency charges for equivalent volume.
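The pipeline work described in the first bullet can be sketched in miniature. This is a hedged outline, not a production implementation: `call_translation_api` is a placeholder for whichever provider's client you actually use, and a real pipeline adds file-format handling and terminology control on top.

```python
# Minimal pipeline skeleton: chunk a document on paragraph boundaries,
# call a translation API with retries, and reassemble the output.
# `call_translation_api` is a placeholder for your provider's client.
import time

def with_retries(fn, attempts=3, backoff=1.0):
    """Retry a flaky API call with exponential backoff."""
    def wrapped(*args, **kwargs):
        for i in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if i == attempts - 1:
                    raise
                time.sleep(backoff * 2 ** i)
    return wrapped

def chunk_paragraphs(text, max_chars=4000):
    """Split on paragraph breaks so no sentence is cut mid-chunk."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def translate_document(text, target_lang, call_translation_api):
    """Translate a whole document chunk by chunk and rejoin it."""
    translate = with_retries(call_translation_api)
    return "\n\n".join(translate(c, target_lang) for c in chunk_paragraphs(text))
```

With a real provider, only `call_translation_api` changes; the retry wrapper and chunking are the parts of the pipeline that stay the same across vendors.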
| Approach | Cost per Word | 50k-Word Job (5 Languages) | Turnaround |
|---|---|---|---|
| Western human agency | $0.10–$0.30 | $25,000–$75,000 | 4–8 weeks |
| AI translation (API only) | $0.00003–$0.0001 | $8–$25 | Minutes to hours |
| AI + pipeline build + spot review | Blended ~$0.002–$0.005 | $500–$1,250 | 1–3 days |
Where does AI translation still produce errors?
AI translation quality has improved sharply since 2022. For common language pairs and general content, neural translation achieves human parity on many standard benchmarks. But there are specific failure modes that are predictable enough to plan around.
Context collapse is the most common one. AI translation models process text in chunks. When a sentence depends on context from three paragraphs earlier (a pronoun referring to a previously named concept, a callback to an earlier tone shift), the model often gets it wrong because it does not carry full-document context the way a human translator does. This shows up most visibly in narrative content, legal contracts with defined terms, and long-form technical documentation.
A 2024 study from the University of Edinburgh tested GPT-4 on 10,000 sentences from professional legal documents and found an 8.3% error rate on sentences requiring cross-paragraph context, compared to a 1.1% error rate from professional human translators on the same set. For most business content, 8% is acceptable. For a contract defining liability, it is not.
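One common mitigation for cross-paragraph failures is to carry a sliding window of preceding source text alongside each chunk, so the prompt can show the model what came before. A minimal sketch; the helper name and window size are illustrative, not a standard API:

```python
# Sketch of a sliding context window: pair each chunk with the tail of
# the source text that preceded it, for inclusion in the prompt.
# The function name and default window size are illustrative.

def chunks_with_context(chunks, context_chars=1000):
    """Return (preceding_context, chunk) pairs for prompt construction."""
    pairs, history = [], ""
    for chunk in chunks:
        pairs.append((history[-context_chars:], chunk))
        history += chunk
    return pairs
```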
Idiomatic and culturally specific language is the second gap. Slogans, jokes, product names, and marketing copy that rely on wordplay in the source language tend to produce literal translations that make no sense in the target language. A tagline like "Shift your thinking" translates literally into many languages as a phrase about moving furniture. AI models catch some of these but miss many more.
Terminology drift is another common failure point. Without a glossary locked into the prompt or the API call, AI models will translate the same technical term differently across sections of the same document. "Account" becomes "cuenta" in one paragraph and "cuenta de usuario" in the next. "Dashboard" becomes "panel" and then "tablero" and then "panel de control." Human translators working from a style guide maintain consistency; AI models without one do not.
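Locking a glossary into the prompt can be as simple as rendering the term mappings into the instructions sent with every chunk. A sketch, using the "account" and "dashboard" examples above; the prompt wording is an assumption, not any vendor's required format:

```python
# Rendering a glossary into the instructions sent with every chunk,
# so "dashboard" is translated the same way in every section.
# The glossary contents and prompt wording are illustrative.

GLOSSARY_EN_ES = {
    "account": "cuenta",
    "dashboard": "panel de control",
}

def build_prompt(glossary: dict, target_lang: str) -> str:
    terms = "\n".join(
        f'- "{src}" must always be translated as "{dst}"'
        for src, dst in glossary.items()
    )
    return (
        f"Translate the user's text into {target_lang}. "
        f"Apply these term mappings in every sentence:\n{terms}"
    )

print(build_prompt(GLOSSARY_EN_ES, "Spanish"))
```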
None of these are disqualifying for most use cases. They are predictable, and predictable problems have solutions: glossary files, chunking strategies, human spot-review on flagged content categories. But a founder who deploys AI translation without knowing these failure modes will eventually publish something embarrassing.
Should I use a hybrid approach with AI plus human review?
For most founders, yes. The question is where in the process the human reviewer sits and how much of the output they touch.
The economics make the answer clear. If AI translation costs $0.00003–$0.0001 per word and human review costs $0.05–$0.08 per word (light edit, not full retranslation), then reviewing 100% of AI output costs roughly $0.05–$0.08 per word total. That is still 60–75% cheaper than full human translation at $0.15–$0.30 per word, with accuracy that approaches the human-only standard.
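That blended rate is easy to model. A sketch using the article's figures, with $0.065 per word assumed here as the midpoint of the $0.05–$0.08 review range:

```python
# Blended per-word cost of the hybrid model: AI translation plus human
# review on some fraction of the output. $0.065/word is assumed as the
# midpoint of the article's $0.05-$0.08 light-edit range.

def blended_cost_per_word(ai_rate, review_rate, review_fraction):
    return ai_rate + review_rate * review_fraction

full_review = blended_cost_per_word(0.0001, 0.065, 1.0)   # about $0.0651/word
spot_review = blended_cost_per_word(0.0001, 0.065, 0.2)   # about $0.0131/word
```

Even at 100% review, the blended rate stays below the bottom of the human-only range; spot review drops it by another factor of five.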
For low-stakes internal content (staff communications, internal wikis, support ticket responses), AI alone is the right answer. Speed matters more than perfect prose, and errors are caught through normal communication feedback loops. Companies like Airbnb and Booking.com have publicly disclosed using AI-only translation for internal communications at scale.
For customer-facing marketing copy, the hybrid model is the standard among companies that have figured this out. AI translates the full volume. A native-speaking editor reviews the roughly 20% of content that is customer-facing: landing pages, product descriptions, onboarding flows, error messages. That editor catches the idiomatic failures and terminology drift. The other 80% (help center articles, changelog entries, transactional emails) ships from AI with light spot-checking.
For regulated industries, human review is not optional. A medical device company translating its instructions for use into German, French, and Italian for EU compliance cannot rely on AI output alone. The EU Medical Device Regulation (MDR) requires translation accuracy to be validated, and "we used a neural model" is not a valid validation method. The same applies to legal filings, financial disclosures, and anything that will be certified. In those cases, AI can draft and reduce the human translator's time by 40–60% (Lionbridge's 2024 post-editing study found that professional translators using AI drafts completed work 47% faster), but a credentialed human must sign off.
A practical decision framework:
- Internal content at any volume: AI only
- Customer-facing marketing, products, support at high volume: AI + human review on top 20% of content
- Legal, medical, certified, or compliance documents: AI draft + full human review and sign-off
- High-value brand copy (slogans, launch campaigns): human only or human-led with AI assist
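The framework above can be expressed as a simple content router that decides the review tier per content type. The category names and policy labels here are illustrative:

```python
# The decision framework above as a content router.
# Category names and policy labels are illustrative, not a standard.

def review_policy(content_type: str) -> str:
    internal = {"staff_comms", "internal_wiki", "support_tickets"}
    regulated = {"legal", "medical", "certified", "compliance"}
    brand = {"slogan", "launch_campaign"}
    if content_type in internal:
        return "ai_only"
    if content_type in regulated:
        return "ai_draft_plus_full_human_review"
    if content_type in brand:
        return "human_led"
    # default: customer-facing marketing, product, and support content
    return "ai_plus_spot_review"
```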
The founders who are winning on translation right now are not choosing between AI and human. They are using AI to eliminate 80–90% of the translation budget and redirecting the savings toward human review where errors actually cost them money.
If you are building a product that needs multi-language support at scale, the translation pipeline is one of the cleaner problems that AI has genuinely solved. Getting the infrastructure set up correctly from the start, with the right glossary controls and the right human review gates, takes two to three weeks of engineering work. Doing it wrong and retrofitting it later takes much longer.
