What Google Translate can tell us about vibecoding

todsacerdoti | 243 points

> ... a translator's and interpreter's work is mostly about ensuring context, navigating ambiguity, and handling cultural sensitivity. This is what Google Translate cannot currently do.

Google Translate can't, but LLMs given enough context can. I've been testing and experimenting with LLMs extensively for translation between Japanese and English for more than two years, and, when properly prompted, they are really good. I say this as someone who worked for twenty years as a freelance translator of Japanese and who still does translation part-time.

Just yesterday, as it happens, I spent the day with Claude Code vibe-coding a multi-LLM system for translating between Japanese and English. You give it a text to be translated, and it asks you questions that it generates on the fly about the purpose of the translation and how you want it translated--literal or free, adapted to the target-language culture or not, with or without footnotes, etc. It then writes a prompt based on your answers, sends the text to models from OpenAI, Anthropic, and Google, creates a combined draft from the three translations, and then sends that draft back to the three models for several rounds of revision, checking, and polishing. I had time to run only a few tests on real texts before going to bed, but the results were really good--better than any model alone when I've tested them, much better than Google Translate, and as good as top-level professional human translation.
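The pipeline the commenter describes (question-driven prompt, parallel drafts, combine, revision rounds) can be sketched in a few lines. This is a hypothetical skeleton, not the commenter's actual code: the real prompts and provider APIs are unknown, so model calls are stubbed as plain callables you'd wire up to each provider's SDK.

```python
from typing import Callable, List

def translate_pipeline(
    text: str,
    brief: dict,                          # answers to the purpose/style questions
    models: List[Callable[[str], str]],   # one callable per provider (stubbed here)
    combine: Callable[[List[str]], str],  # merges several drafts into one
    rounds: int = 2,
) -> str:
    """Multi-model translate / combine / revise loop, as described above."""
    # Build a translation prompt from the user's answers about purpose and style.
    prompt = f"Translate ({brief}): {text}"
    # Get an independent draft from each model.
    drafts = [model(prompt) for model in models]
    # Merge the drafts into a single combined draft.
    draft = combine(drafts)
    # Send the combined draft back for several rounds of revision and polishing.
    for _ in range(rounds):
        revisions = [model(f"Revise this translation: {draft}") for model in models]
        draft = combine(revisions)
    return draft
```

With three real provider clients plugged in as `models` and an LLM-backed `combine`, this reproduces the draft-merge-revise loop; the stubs just make the control flow visible.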

The situation is different with interpreting, especially in person. If that were how I made my living, I wouldn't be too worried yet. But for straight translation work where the translator's personality and individual identity aren't emphasized, it's becoming increasingly hard for humans to compete.

tkgally | 18 hours ago

Machine translation is a great example. It's also where I expect AI coding assistants to land. A useful tool, but not some magical thing that is going to completely replace actual professionals. We're at least one more drastic change away from that, and there's no guarantee anyone will find it any time soon. So there's not much sense in worrying about it.

A very similar story has been playing out in radiology for the past decade or so. Tech folks think that small-scale examples of super-accurate AIs mean radiologists will no longer be needed, but in practice the demand for imaging has grown while people have been scared away from joining the field. The efficiency gains from AI haven't been enough to bridge the gap, resulting in a radiologist _shortage_.

yoden | 3 hours ago

The distinction between what people typically imagine a translator's job is and the reality reminds me of Pixar movies being "localized" instead of just translated (green beans on the plate in the Japanese release instead of broccoli, because green beans are the food Japanese kids don't like).

Lacking cultural context while reading translated texts is what made studying history finally interesting to me.

philsnow | 18 hours ago

This article is spot on about a lot of things. One thing I think it fails to address is this:

> I feel confident in asserting that people who say this would not have hired a translator or learned Japanese in a world without Google Translate; they’d have either not gone to Japan at all, or gone anyway and been clueless foreigners as tourists are wont to do.

The corresponding claim here would be something like: the people now using AI to build apps would previously never have created an app at all, so it isn't affecting software development as a career as much as you might first expect.

It would be like saying AI art won’t affect artists, because the people who would put in such little effort probably would never have commissioned anyone. Which may be a little true (at least in that it reduces the impact).

However, I don’t necessarily know if that’s true for software development. The ability to build software enabled huge business opportunities at very low costs. I think the key difference is this: the people who are now putting in such low effort into commissioning software maybe did hire software engineers before this, and that might throw off a lot of the numbers.

sodality2 | 18 hours ago

I was thinking about this comparison recently along a slightly different axis: one challenge when working with translations (human or machine) is that you can't vet whether they're correct yourself. You just have to trust the translation and hope for the best. It's a lot easier to trust a person than a machine, though I did once have someone message me to say, "This translation of your article is so bad I feel like the translator did not put in a serious attempt."

It's similar to vibe coding when the user truly doesn't know how to create the same thing themselves: you end up with output that you have no way of knowing is correct, and you just have to blindly trust it and hope for the best. That doesn't work in many situations, so you still need expertise anyway (either yours or someone else's).

jezzamon | 14 hours ago

While it's just anecdotal evidence, I have translator friends, and work has indeed been drying up over the past decade; that decline has only accelerated with the introduction of LLMs. Just check any forum or Facebook group for translators; it's all doom and gloom about AI. See this Reddit thread, for example: https://www.reddit.com/r/TranslationStudies/comments/173okwg...

While professionals still produce much better quality translations, the demand for everything but the most sensitive work is nearly gone. Would you recommend your offspring get into the industry?

NicuCalcea | 17 hours ago

Some additional things that translators do (which I recall from a professional translator friend, put in my own words):

* Idioms (The article mentions in passing that this isn't so much a difficulty in Norwegian->English, but of course idioms usually don't translate as sentences)

* Cultural references (From arts, history, cuisine, etc. You don't necessarily substitute, but you might have to hint if it has relevant connotations that would be missed.)

* Cultural values (What does "freedom" mean to this one nation, or "passion" to this other, or "resilience" to another, and does that influence translation)

* Matching actor in dubbing (Sometimes the translation you'd use for a line of a dialogue in a book doesn't fit the duration and speaking movements of an actor in a movie, so the translator changes the language to fit better.)

* Artful prose. (AFAICT, LLMs really can't touch this, unless they're directly plagiarizing the right artful bit)

neilv | 16 hours ago

This seems like a terrible comparison, since Google Translate is completely beaten by DeepL, let alone LLMs. (Google Translate almost surely doesn't use an LLM, or at least not a _large_ one, given its speed.)

krackers | 18 hours ago

>All this is not to say Google Translate is doing a bad job

Google Translate is doing a bad job.

The Chrome translate function regularly detects Traditional Chinese as Japanese. While many characters are shared, detecting the latter is trivial by comparing Unicode code points: Chinese has no kana. The function used to detect this correctly, but it has regressed.
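The code-point check really is a one-liner. A minimal sketch: kana presence is decisive for Japanese, since the Hiragana (U+3040–U+309F) and Katakana (U+30A0–U+30FF) Unicode blocks are contiguous and unused by Chinese (note the converse doesn't hold: all-kanji Japanese text contains no kana either).

```python
def contains_kana(text: str) -> bool:
    """True if any character falls in the Hiragana (U+3040-U+309F)
    or Katakana (U+30A0-U+30FF) Unicode blocks, which are contiguous."""
    return any(0x3040 <= ord(ch) <= 0x30FF for ch in text)
```

A detector seeing any kana can rule out Chinese immediately, which is the regression the commenter is describing.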

Most irritatingly of all, it doesn't even let you correct its mistakes: as is the rule for all kinds of modern software, the machine thinks it knows best.

devnullbrain | 17 hours ago

> At the dinner table a Norwegian is likely to say something like “Jeg vil ha potetene” (literally “I will have the potatoes”, which sounds presumptuous and haughty in English) where a Brit might say “Could I please have some potatoes?”.

I find “I will have the potatoes” to be perfectly fine English and not haughty in the slightest. Is this a difference between British English and American English?

ryao | 15 hours ago

In my limited experience, LLMs can have issues with translation tone — but these issues are pretty easily fixed with good prompting.

I want to believe there will be even more translators in the future. I really want to believe it.

dr_dshiv | 18 hours ago

Fascinating thought piece. While I agree with the thrust of the piece (that LLMs can't really replace engineers), unfortunately the way the industry works is that the excuse of AI, however grounded in reality, has been repurposed as a cudgel against actual software industry workers. Sure, eventually everyone might figure out that AI can't really write code by itself, and software quality will degrade. But unfortunately we've long been on the path of enshittification, and I fear the trend will only continue. If Google's war against its own engineers has resulted in shittier software, and things start to break twice a year instead of once, would anyone really blink twice?

darvinyraghpath | 18 hours ago

>> For what it’s worth, I don’t think it’s inconceivable that some future form of AI could handle context and ambiguity as well as humans do, but I do think we’re at least one more AI winter away from that, especially considering that today’s AI moguls seem to have no capacity for nuance, and care more about their tools appearing slick and frictionless than providing responsible output.

Fantastic closing paragraph! Loved the article

camillomiller | 8 hours ago

As long as the person you are talking or writing to is aware that you're not a native speaker, they will understand that you won't be able to follow conventions around polite language or understand subtle nuances on their part. It's really a non-issue. The finer cues of language are intended for people who are from the same culture.

carlosjobim | 18 hours ago

> I see claims from one side that “I used $LLM_SERVICE_PROVIDER to make a small throwaway tool, so all programmers will be unemployed in $ARBITRARY_TIME_WINDOW”, and from the other side flat-out rejections of the idea that this type of tool can have any utility.

No, the one side is saying that all their code is written by LLMs already, and that's why they think that. In fact, I would say the other side holds the former position ("it works for throwaway code, but that's it"), and that no one is flat-out rejecting it.

Kiro | 10 hours ago

LLM = Google Translate + Context

banq | 10 hours ago

Prompting and context solves this.

Seanambers | 6 hours ago

I just posted that Google is translating words in foreign pop music that should be rendered as "bastard" (in this case, sung by a member of the most popular boy band in the world, BTS) as racial slurs like the n-word. This is pretty much the worst-case error scenario for a translation service.

https://kpopping.com/news/2025/Jun/17/Google-Translation-of-...

It's amazing more people aren't talking about this stuff. How incompetent do you have to be to allow racial slurs in translation, especially with all the weighting toward being pro-diversity that already exists?

Just more enshittification from ol' Sundar and the crew.

p3rls | an hour ago

Hopefully it tells everyone to never use this douchey term again.

DidYaWipe | 9 hours ago

"Behold the impeccable nuance of my opinion"

autobodie | 11 hours ago

This is like the worst comparison, since generative AI is far better at conversational translation than Google Translate.

LLMs will explain idioms, slang, and the point behind them.

You can take a screenshot of Telegram channels from both sides of a war and get all the context in a minute.

In classic HN fashion, I'm sure I missed the point: OK, translators are still in demand, got it.

Google Translate has been leapfrogged by the same thing that allows for “vibecoding”

yieldcrv | 16 hours ago

The article starts with a giant straw man and mischaracterisation; I'm not sure I want to read the rest of the article at this point.

noname120 | 15 hours ago