Top News

Google Introduces TranslateGemma, a new suite of open translation models built on Gemma 3
ET Online | January 16, 2026 10:38 PM CST

Synopsis

Google has unveiled TranslateGemma, a new set of open translation models. Built on Gemma 3, the models aim to support communication in many languages across a range of devices while delivering high-quality translation at lower computational cost.

Google has announced TranslateGemma, a new family of open translation models designed to support multilingual communication across a wide range of devices and deployment environments. Built on Gemma 3, the new models aim to deliver high-quality machine translation while maintaining lower computational requirements.

TranslateGemma is available in 4B, 12B, and 27B parameter variants and currently supports translation across 55 languages, covering both high-resource and low-resource language pairs. According to Google, the models are designed to balance efficiency and accuracy, making them suitable for use cases ranging from mobile and edge devices to cloud-based deployments.

In internal evaluations using the WMT24++ benchmark, Google reports that the 12B TranslateGemma model outperforms the larger Gemma 3 27B baseline in translation quality metrics, despite using fewer parameters. The smaller 4B model is positioned for on-device and mobile inference, offering competitive performance at lower latency.


TranslateGemma models are trained in two stages: supervised fine-tuning on a mix of human-translated and synthetic datasets, followed by reinforcement learning guided by quality-estimation metrics. Google states that this approach improves contextual accuracy and translation fluency across languages.

Beyond text translation, TranslateGemma retains multimodal capabilities inherited from Gemma 3. Early testing indicates improved performance in translating text embedded within images, even without additional multimodal-specific training.

The models are designed to run across different environments:

4B for mobile and edge devices
12B for consumer laptops and local development
27B for cloud deployment on a single GPU or TPU

TranslateGemma is available as an open model via platforms including Kaggle, Hugging Face, and Google Cloud’s Vertex AI, with a detailed technical report published on arXiv. Google says the release is intended to support researchers and developers working on multilingual applications, custom translation workflows, and low-resource language support.

