AI · Internationalization · Web Development

Your LLM Speaks Dozens of Languages. You Just Never Asked.

Code43 · 5 min read
[Diagram: "The Missing Instruction." Before — the user's es-ES browser sends an HTTP request, but the Accept-Language header is never read, no language instruction reaches the AI prompt, and the LLM always responds in English. After — Accept-Language: es-ES is read, three added lines append "Respond in: es-ES" to the prompt, and the LLM responds localized. One helper function. Zero translation files. Works for any language the model supports.]
Before the fix, the Accept-Language header was ignored and the LLM defaulted to English every time. Three appended lines change that.

What Analytics Told Us

We started noticing a pattern in analytics: a meaningful share of visitors were coming from non-English-speaking countries. Mexico, Brazil, Spain, Germany, France. Not a small rounding error — enough to matter.

The instinct was immediate: we need internationalization. Translation files, locale routing, language detection middleware, a whole subsystem dedicated to presenting the right language to the right user. That work is real, and eventually worth doing.

But before jumping to the solution, we asked the simpler question: what is actually broken for these users right now?

What the Browser Already Handles

Most browsers handle static UI translation without any help from us. Chrome's built-in translation prompt appears within seconds of loading a page in a language the user doesn't prefer. For navigation labels, button copy, headings, form placeholders, the browser covers a lot of ground. It's not perfect, but it's functional.

The gap showed up somewhere more specific: dynamic AI-generated content. Suggestions, form analysis responses, content recommendations, anything the LLM produced on the fly. These arrive as pre-rendered strings injected into the DOM after the page loads. The browser never sees them coming. Chrome's translation engine doesn't intercept them. They were landing in English, every time, regardless of where the user was or what language they preferred.

That was the actual problem.

The Model Was Waiting for Permission

We were already using an LLM for several features: content suggestions, form field analysis, search result reranking. None of them were responding in anything other than English.

Here is the thing: the model does not default to English because it only knows English. It defaults to English because we never told it not to. Modern LLMs are genuinely multilingual. They can produce fluent output in dozens of languages. The model was capable of responding in Spanish or Portuguese or French the entire time. It was just waiting for an instruction that never came.

Meanwhile, the user's language preference was sitting right there in every request. The Accept-Language header is sent automatically by every browser. A user in Spain with a Spanish-language Chrome installation sends Accept-Language: es-ES,es;q=0.9,en;q=0.8 with every single request. We were reading other headers already. We just were not routing this one to the AI.
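For illustration, here is what that header actually encodes. The q-values rank the user's preferences, and unpacking them is a few lines (a sketch for clarity; the fix below only needs the first entry):

```typescript
// Sketch: parsing Accept-Language into an ordered preference list.
// "es-ES,es;q=0.9,en;q=0.8" → ["es-ES", "es", "en"]
function parseAcceptLanguage(header: string): string[] {
  return header
    .split(",")
    .map((part) => {
      const [locale, qPart] = part.trim().split(";");
      // A missing q-value means q=1.0 (highest priority) per the HTTP spec.
      const q = qPart ? parseFloat(qPart.replace("q=", "")) : 1.0;
      return { locale: locale.trim(), q };
    })
    .sort((a, b) => b.q - a.q)
    .map((entry) => entry.locale);
}

console.log(parseAcceptLanguage("es-ES,es;q=0.9,en;q=0.8"));
// → ["es-ES", "es", "en"]
```

Because browsers put the top preference first, taking the first comma-separated entry is usually enough, which is exactly what the helper below does.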

The Fix

One helper function:

function getLanguageInstruction(acceptLanguage: string | null): string {
  // No header → no instruction; the model keeps its default behavior.
  if (!acceptLanguage) return "";
  // "es-ES,es;q=0.9,en;q=0.8" → take the highest-priority locale, "es-ES".
  const locale = acceptLanguage.split(",")[0].trim();
  // English browsers get an empty string, so nothing changes for them.
  if (locale.startsWith("en")) return "";
  return `Respond in the user's language: ${locale}.`;
}

And three lines appended to each AI prompt: read the header, call the function, append the result. If the user is on an English browser, the function returns an empty string and nothing changes. If the user is on an es-ES browser, the prompt ends with “Respond in the user's language: es-ES.” The model reads that instruction and applies it.
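The wiring could look something like this. This is a sketch, not the exact production code: buildSuggestionPrompt is a hypothetical prompt builder, and how you read request headers depends on your framework.

```typescript
// The helper from above, repeated so this sketch is self-contained.
function getLanguageInstruction(acceptLanguage: string | null): string {
  if (!acceptLanguage) return "";
  const locale = acceptLanguage.split(",")[0].trim();
  if (locale.startsWith("en")) return "";
  return `Respond in the user's language: ${locale}.`;
}

// Hypothetical prompt builder for one of the AI features.
function buildSuggestionPrompt(
  content: string,
  acceptLanguage: string | null
): string {
  const basePrompt = `Suggest improvements for the following content:\n${content}`;
  // The three appended lines: read the header, call the helper, append the result.
  const languageInstruction = getLanguageInstruction(acceptLanguage);
  return languageInstruction
    ? `${basePrompt}\n\n${languageInstruction}`
    : basePrompt;
}
```

For an English browser the prompt is unchanged; for anything else, the instruction is the last thing the model reads before it responds.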

No translation files. No locale routing. No external service. No configuration. Works for any language the model supports, which at this point is most of them.

What This Does Not Solve

The Accept-Language header reflects browser and OS settings, not necessarily the user's intent. A Spanish speaker who installed their browser in English, or who uses a work machine configured in English, will not benefit. The header says “my browser prefers English” and the prompt behaves accordingly.

This also does not touch anything outside the AI content layer. Static UI copy, email templates, Open Graph metadata, hardcoded error messages: all still English. For a user who is genuinely struggling with the interface, this is a partial improvement. It makes the AI-generated parts of the experience feel more natural. It does not change the navigation, the headings, or the calls to action.

Full i18n is still the right answer for a product that wants to serve non-English speakers well. This addresses a specific gap without touching everything else. That is precisely what makes it useful as a first step.

Practical Takeaways

  • Ask what is actually broken before building a full solution. The instinct to reach for the complete, correct answer is understandable. But “what is specifically failing for these users?” sometimes leads somewhere much smaller and faster to ship.
  • LLMs are multilingual by default. You do not need translation infrastructure for AI-generated content. You need an instruction. The model already knows how to respond in the user's language; it just needs to be told to.
  • Accept-Language is already there. Every browser sends it. Every server-side framework can read it. It is not a perfect signal for user intent, but it is a good one and costs nothing to use.
  • Browser translation handles static text. The real gap is dynamic content injected after load. That is where this kind of fix earns its keep.

Want to understand the other places AI earns its keep on a production site? Adding AI to Your Website (Without the Hype) covers where it genuinely helps and where it doesn't. And if you're thinking about performance for international users, start with how CDNs keep your site fast globally. Language means nothing if the page takes five seconds to load.

Need help with your infrastructure?

Whether it's DNS, deployment, or full-stack architecture — Code43 can help you get it right.

Book a Consultation