Ask HN: Can anybody clarify why OpenAI reasoning now shows non-English thoughts?
People have noticed for a while now that Google's Bard/Gemini often inserts random Hindi/Bengali words. [0]
I just caught this in an o3-pro thought process: "and customizing for low difficulty. কাজ করছে!"
That last set of chars is apparently Bengali for "working!".
I just find it curious that similar "errors" are appearing across multiple different models... what is it about the training methods that lets these other languages creep in? Does anyone know?
[0] https://www.reddit.com/r/Bard/comments/18zk2tb/bard_speaking_random_languages/
LLMs aren't humans and there's no reason to expect their "thinking"[1] to behave exactly - or even much - like human thinking. In particular, they don't need to "think" in one language. More concretely, in the DeepSeek R1 paper[2] they observed this "thought language mixing" and did some experiments on suppressing it... and the model results got worse. So I wouldn't personally think of it as an "error", but rather as just an artifact of how these things work.
[1]: By this I mean "whatever it is they do that can be thought of as sorta kinda roughly analogous to what we generally call thinking." I'm not interested in getting into a debate (here) about the exact nature of thinking and whether or not it's "correct" to refer to LLMs as "thinking". It's a colloquialism that I find useful in this context, nothing more.
[2]: https://arxiv.org/pdf/2501.12948
Multilingual LLMs don't have a clear boundary between languages. They will appear to have one since they maximize likelihood, so asking something in English will most likely produce an English continuation, etc.
In other circumstances they might take a different path through other character sets during decoding, if the probabilities justify it.
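A toy sketch of a single decoding step in Python (all the numbers and token choices here are made up; a real vocabulary has 100k+ tokens, but the mechanism is the same):

    import math, random

    # Hypothetical logits at one decoding step of a multilingual model.
    # Tokens from every script sit in the same vocabulary and compete
    # purely on probability; there is no "language" gate.
    logits = {
        "working": 2.1,   # English continuation
        "done":    1.8,   # another English option
        "কাজ":     2.3,   # Bengali token; nothing stops it from winning
    }

    def softmax(scores):
        z = sum(math.exp(v) for v in scores.values())
        return {tok: math.exp(v) / z for tok, v in scores.items()}

    probs = softmax(logits)

    # Greedy decoding takes the argmax regardless of script; sampling can
    # wander into another script even when it isn't the single best token.
    print(max(probs, key=probs.get))
    print(random.choices(list(probs), weights=list(probs.values()))[0])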
This has been really interesting to me. I've been learning Spanish for a while and will mix un poco español en my sentences with ChatGPT all the time, and it's cool to see the same thing reflected back to me. It's not uncommon for a response to be 75% in English with 25% Spanish at the beginnings and ends especially. All of my conversation titles are in Spanish because I always start them with "Hola", so whatever model sets the title just assumes Spanish for it, regardless of what the rest of the message is.
I was on vacation last week and was waiting at a restaurant for a while. The lady behind me was switching between English and Spanish every few sentences, which caught my attention. I can only assume the person on the other end was also bilingual. Someone in their family had a medical emergency and was in the hospital. What's interesting is that the sentences about medical matters were in very good English, while the Spanish sentences were about other things (I can't interpret very fast at all). Given the speed and fluency of their conversation, it seemed like there was no cost on their part to using either language.
I am bilingual.
My phrases switch to the language I learned them in very easily.
Computer terms are almost always English.
A lot of idioms I learned in my adult life are going to stay English, even if a Turkish equivalent exists and I learned about it later.
I am bilingual as well, my children are trilingual.
I find that it is way easier for me to translate to or from English (I'm not a native speaker) and any of the languages I am bilingual in than between those languages. It is very hard for me to listen to one and speak the other.
I spent a good amount of time in the Middle East and loved listening to my friends arguing in Arabic.
To my French ear it sounded like they were sentencing me to terrible things (and they were always surprised to hear they sounded like that :)), up until the random "router" or "framework" that turned out to be the core of the fight.
I love listening to languages I do not understand (a great source is Radio Green) and trying to work out from the words what they are talking about.
Another one is one of my closest friends, a German, who speaks a very soft English. That held until he described to me how to drive somewhere (pre-GPS era), and the names he was using were like lashes.
Speaking various languages is a blessing.
Language is incredibly interesting to me. Especially when it’s blended or becomes its own pidgin dialect. Multilingual societies are fascinating.
I’m so glad I am not the only one that does this!
I usually interact with LLMs in English. A few weeks ago I made a Gemini Gem that tries to consider two opposing sides, moderator included. Somehow it started including bits of Spanish in some of its answers, which I actually don't mind because that's my primary language.
I assumed it knew I speak Spanish from other conversations, my Google profile, geolocation, etc. Maybe my English has enough hints that it was learned by a native Spanish speaker?
There have been a nonzero number of times when asking Gemini in Finnish about the demoscene or early-1990s tech has returned much more... colorful answers than equivalent questions in English.
Colorful answers?
I understand that, but how common could it possibly be to mix a single Bengali word or phrase like that into a larger English one?
Perhaps it's more common in the parts of the world where Bengali and English are more commonly spoken in general?
Why so much Bengali/Hindi then, and why not other languages?
There are many users in India training these models. There is also a lot more content out there that the models are consuming.
And not to forget, many (most?) Indians are bilingual. Multilingual speakers tend to switch languages within a conversation if both parties are fluent -> the training material includes those switches.
I have no idea what's going on with ChatGPT, but I can say it's pretty common for multilingual people to be thinking about things in a different language from what they are currently speaking.
Language itself structures how you think about things, too. Some thoughts are easier to have in one language than in another, because the language naturally expresses the idea in a particular way that is possible, but less natural, to express in the other.
Interesting, thanks. Yeah, I forgot that even I used to be able to think in another language long ago!
I don't actually think this is the case, but nonetheless I think it would be kind of funny if LLMs somehow "discovered" linguistic relativity (https://en.wikipedia.org/wiki/Linguistic_relativity).
This isn’t entirely surprising. Language-model “reasoning” is basically the model internally exploring possibilities in token-space. These models are trained on enormous multilingual datasets and optimized purely for next-token prediction, not language purity. When reasoning traces or scratchpads are revealed directly (as OpenAI occasionally does with o-series models, or as with DeepSeek-R1-Zero), it’s common to see models slip into code-switching or even random language fragments, simply because it’s more token-efficient in their latent space.
For example, the DeepSeek team explicitly reported this behavior in their R1-Zero paper, noting that reasoning trained purely with RL emerges naturally but brings some “language mixing” along. Interestingly, they found that a cold-start supervised fine-tuning (SFT) step plus a language-consistency reward improved readability, though it came with a slight performance trade-off [1].
My guess is OpenAI has typically used a smaller summarizer model to sanitize reasoning outputs before display (they mentioned summarization/filtering briefly at Dev Day), but perhaps lately they’ve started relaxing that step, causing more multilingual slips to leak through. It’d be great to get clarity from them directly on whether this is intentional experimentation or just a side-effect.
[1] DeepSeek-R1 paper that talks about poor readability and language mixing in R1-zero’s raw reasoning https://arxiv.org/abs/2501.12948
[2] OpenAI “Detecting misbehavior in frontier reasoning models” — explains use of a separate CoT “summarizer or sanitizer” before showing traces to end-users https://openai.com/index/chain-of-thought-monitoring/
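A quick way to poke at the token-efficiency intuition above yourself (a sketch assuming the tiktoken package; o200k_base is the public encoding used by recent OpenAI models, and whether a given language is actually cheaper depends entirely on the tokenizer and the phrase):

    import tiktoken

    # Compare how many tokens the "same" phrase costs across languages.
    enc = tiktoken.get_encoding("o200k_base")
    for text in ["working!", "কাজ করছে!", "es funktioniert!"]:
        print(repr(text), len(enc.encode(text)), "tokens")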
Models like o3 are rewarded for the final output, not the intermediate thinking steps. So whatever it generates as "thoughts" that leads to a better answer gets a higher score.
The DeepSeek-R1 paper has a section on this, where they 'punish' the model if it thinks in a different language, to make the thinking tokens more readable. Anthropic probably does this too.
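For intuition, here's a minimal sketch of what that kind of penalty could look like (hypothetical helper names and weighting, not DeepSeek's actual code; the paper describes the reward as the proportion of target-language words in the CoT):

    import re

    def language_consistency(cot: str) -> float:
        # Crude proxy: fraction of whitespace-separated tokens that are
        # plain ASCII, standing in for "target language" here.
        words = re.findall(r"\S+", cot)
        if not words:
            return 1.0
        return sum(1 for w in words if w.isascii()) / len(words)

    def total_reward(task_reward: float, cot: str, lam: float = 0.1) -> float:
        # Blending in the consistency term shifts some weight away from
        # pure task accuracy, i.e. the trade-off the paper reports.
        return task_reward + lam * language_consistency(cot)

    print(total_reward(1.0, "thinking step by step... done"))      # no mixing
    print(total_reward(1.0, "thinking step by step... কাজ করছে!"))  # penalized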
Others have mentioned that DeepSeek R1 also noticed this “problem”. I believe there are two things going on here.
One, the model is no longer being trained to output likely tokens or tokens likely to satisfy pairwise preferences. So the model doesn’t care. You have to explicitly punish the model for language switching, which dilutes the reasoning reward.
Two, I believe there has been some research showing that models represent similar ideas in multiple languages in similar regions of their latent space. Sparse autoencoders have shown this. So if the translated text makes sense, I think this is why. If not, I have no idea.
Multilingual humans do this too. Sometimes a concept is easier to shorthand in one language versus another. It’s somehow “closer”.
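A weak but easy-to-run proxy for the "similar areas" claim (a sketch assuming the sentence-transformers package; the checkpoint name is a real public model, but this only looks at output embeddings, which is far shallower than the sparse-autoencoder work):

    from sentence_transformers import SentenceTransformer, util

    # Multilingual embedding models tend to place translations of the
    # same idea close together in vector space.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    emb = model.encode(
        ["it's working!", "কাজ করছে!", "the weather is nice today"],
        convert_to_tensor=True,
    )
    print(util.cos_sim(emb[0], emb[1]).item())  # translation pair: high
    print(util.cos_sim(emb[0], emb[2]).item())  # unrelated pair: lower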
If the reasoning didn't need to be exposed to a user, are there ways to get better performance from the same LLM methods by using a language better suited to the task? (Either an existing language or a bespoke one.)
(Inspired by movies and TV shows, when characters switch from English to a different language, such as French or Mandarin, to better express something. Maybe there's a compound word in German for that.)
Languages are thought encodings.
Most people can only encode/decode a single language but an LLM can move between them fluidly.
I do some AI training as a side gig, and there have been a few recent updates on code-switching (i.e. mixing two languages in the same utterance) in the last few months. It's possible that these changes caused such behavior recently.
It would be interesting to study when this type of behavior emerges to see what the patterns are. It could give insights into language or culture specific reasoning patterns and subjects that are easier to convey in one language or another. Is it easier to understand math word problems in XXX or YYY? What about relationships?
Definitely curious what circuits light up from a Neuralese perspective. We want reasoning traces that are both faithful to the thought process and interpretable. If the other-language segments are lighting up meanings much different from their translations, that would raise questions for me.
I've also seen Russian and Chinese, which I certainly have never spoken to it in, nor understand.
I remember watching a video mentioning it (https://www.youtube.com/shorts/Vv5Ia6C5vYk).
The main suspicion is that it's more compact?
Reminds me of the son of a friend of mine, who was raised bilingually (English and French). When he was 3, he would sometimes ask "is this English, or the other language?"
Multilingual humans do this too, so not surprising that AI does this.
In fact monolingual humans have quite a limited understanding of the world.
I know plenty of bilingual people who have a very limited understanding of the world, and conversely monolinguals who have a very broad view.
One could even say assuming someone's level of worldly understanding based on how many languages they speak shows a fairly limited world view.
As a speaker of five languages, all but one fluently: why does my understanding of the world magically increase when I learn a new noun, say "sparrow", in the fifth one that I'm learning?
Does it increase linearly (25% more understanding for the fifth) or asymptotically? Does it increase across all domains equally (geology, poetry, ethics) or asymmetrically?
Seriously, explain it to me?
There's no such thing as a monolingual human. Any language can be broken down into subsets that are associated with different ways of thinking. Another factor is globalization and culture export.
I see this as a problem. You can't make an LLM "unlearn" something; once it's in there, it's in there. If I have a huge database, I can easily delete swathes of useless data, but I cannot do the same with an LLM. It's not a living, thinking being; it's a program running on a computer, a device that we can, in other circumstances, add information to or remove it from. We can suppress certain things, but that information is still in there, taking up space, and can still possibly be accessed.
We are intentionally undoing one of the things that makes computers useful.