Wednesday, March 19, 2025

Slator 2025 Localization Buyer Survey

Slator’s 30-page 2025 Localization Buyer Survey provides key insights from enterprise buyers of translation and localization services. The survey of 50 enterprise buyers uncovers the challenges buyers expect to face in 2025 and assesses the extent to which they have extracted value from language AI to date.

The findings offer valuable insights for both buyers and providers. For enterprise buyers, the report serves as a benchmarking tool — buyers can compare their strategies on budgets, AI adoption, and operational challenges with industry peers.

For language service providers (LSPs) and translation management system (TMS) providers, the results provide direct insights into where buyers perceive gaps in support. The survey responses shed light on buyer expectations and priorities, helping inform service development, technology investment, and client engagement strategies.

Respondents represented a diverse range of industries, offering a broad perspective on localization strategies, operational hurdles, and technology-driven shifts.

AI’s impact on localization budgets was a key theme. The survey explored how buyers expect AI-driven efficiencies to influence spending — whether cost savings will reduce budgets, be reinvested, or be reallocated. The survey also assessed how successfully buyers have integrated AI to date, the tangible value realized, and common barriers to realizing AI value.

The survey also looks at changes in per-unit base rates for translation over the past year and expectations for pricing in the next 12 months. These insights offer a view of how pricing strategies may evolve, allowing LSPs to anticipate pricing pressures and align with buyer expectations.

Operational challenges and inefficiencies were another key focus. Buyers identified their greatest cost inefficiencies and the barriers to achieving their localization goals. Open-text responses were categorized, revealing 10 common inefficiencies and five key challenges for 2025 — highlighting persistent hurdles from technological integration to internal complexity.

The survey also explored buyer expectations for AI implementation, particularly the role of LSPs. Buyers indicated whether they seek strategic guidance, hands-on support, or fully integrated AI solutions. The results show that most buyers view LSPs as strategic partners, expecting proactive support to scale AI solutions and optimize workflows.

Finally, the survey looked into buyer expectations for AI integration within the TMS. The results show that almost all buyers now see AI capabilities as a baseline requirement, with many expecting customizable AI workflows and integrated machine translation. 

Ultimately, these insights aim to help stakeholders navigate the changing localization landscape in 2025 — enabling buyers to benchmark strategies, optimize workflows, and address operational challenges, while supporting LSPs and TMS providers in adapting services, aligning with buyer expectations, and responding effectively to emerging industry shifts.

Monday, March 17, 2025

Android’s Instant Translation Gets Enhanced in ‘Circle to Search’

An article on the Android Authority site from March 2025 states that the instant translation function could soon be available as a permanent feature within “Circle to Search.”

If enabled, users will be able to translate whatever they select on screen using Circle to Search into any of a device’s supported languages (these vary by device brand and model) without navigating out of the search function.

Back in March 2024, Google announced the rollout of translation capabilities in the “Circle to Search” function for Android, which allows users to circle or tap text or an object on their mobile device screen and automatically search for it without switching apps.

So far, users have been able to translate things like menus on photos or PDFs by long-pressing the home button or navigation bar, and then tapping the translate icon. 

Google’s Android XR, the operating system introduced in December 2024 and enhanced by Gemini 2.0, is slated to support instant translation as a standard feature beyond mobile devices. The function can be expected on watches, glasses, TVs, cars, and more, along with other AI-powered features like directions and message summaries.

At the time of writing, Google states that the translation feature is available in over 100 languages on its own Pixel smartphones. “If you want to chat with your cousin in Spain or translate a menu at a restaurant online, Circle to Search can help, no matter where the text appears – in a PDF, video, web page, or chat,” says the marketing copy in the Google Store.

Tuesday, March 11, 2025

New Research Explores How to Boost Large Language Models’ Multilingual Performance

In a February 20, 2025 paper, researchers Danni Liu and Jan Niehues from the Karlsruhe Institute of Technology proposed a way to improve how large language models (LLMs) perform across different languages.

They explained that LLMs like Llama 3 and Qwen 2.5 show strong performance in tasks like machine translation (MT) but often struggle with low-resource languages due to limited available data. Current fine-tuning processes do not effectively bridge the performance gaps across diverse languages, making it difficult for models to generalize beyond high-resource settings.

The researchers focus on leveraging the middle layers of LLMs to enable better cross-lingual transfer across multiple tasks, including MT.

LLMs consist of multiple layers. The early (or bottom) layers handle basic patterns like individual words, while the final (or top) layers focus on producing a response. The middle layers play a key role in capturing the deeper meaning of sentences and how different words relate to each other.

Liu and Niehues found that these middle layers “exhibit the strongest potential for cross-lingual alignment,” meaning they help ensure that words and phrases with similar meanings are represented in a comparable way across languages. Strengthening this alignment helps the model transfer knowledge between languages more effectively.

By extracting embeddings (i.e., representations of text in vector form) from the model’s middle layers and adjusting them so that equivalent concepts are closer together across languages, the researchers aim to improve the model’s ability to understand and generate text in multiple languages.
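
To make that concrete, here is a minimal sketch of pulling a mean-pooled middle-layer sentence embedding out of a Hugging Face Transformers model and comparing two parallel sentences. The model name, layer index, and cosine comparison are illustrative assumptions, not the paper’s exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative choices (assumptions): any decoder-only LLM works the same way.
MODEL_NAME = "meta-llama/Meta-Llama-3-8B"
MIDDLE_LAYER = 16  # roughly the middle of a 32-layer model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)

def middle_layer_embedding(text: str) -> torch.Tensor:
    """Mean-pool one middle layer's hidden states into a sentence vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple of (num_layers + 1) tensors, each of shape
    # (batch, seq_len, hidden_dim); the index selects the middle layer.
    return outputs.hidden_states[MIDDLE_LAYER].mean(dim=1).squeeze(0)

# Parallel sentences should end up close together in this space.
en = middle_layer_embedding("The bank hedged its currency exposure.")
fr = middle_layer_embedding("La banque a couvert son exposition aux devises.")
alignment = torch.nn.functional.cosine_similarity(en, fr, dim=0)
```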

Alternating Training Strategy

Rather than relying solely on task-specific fine-tuning, they introduce an “alternating training strategy” that switches between task-specific fine-tuning (e.g., for translation) and alignment training. Specifically, an additional step — middle-layer alignment — is integrated into the fine-tuning process to ensure that the representations learned in one language are more transferable to others.
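
A toy sketch of that schedule follows; the model, data, and loss functions below are stand-in assumptions, since the paper fine-tunes an actual LLM with its own objectives.

```python
import torch

# Stand-ins (assumptions): a tiny linear "model" and random batches, used
# only to illustrate the alternating schedule itself.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def task_loss(batch):
    # Stand-in for the task objective (e.g., translation cross-entropy).
    return model(batch).pow(2).mean()

def alignment_loss(src_batch, tgt_batch):
    # Stand-in for alignment: pull parallel representations together.
    src, tgt = model(src_batch), model(tgt_batch)
    return (1 - torch.nn.functional.cosine_similarity(src, tgt, dim=-1)).mean()

task_loader = [torch.randn(4, 8) for _ in range(10)]
align_loader = [(torch.randn(4, 8), torch.randn(4, 8)) for _ in range(10)]

# Alternate between the two objectives instead of training on one alone.
for step, (task_batch, (src, tgt)) in enumerate(zip(task_loader, align_loader)):
    optimizer.zero_grad()
    loss = task_loss(task_batch) if step % 2 == 0 else alignment_loss(src, tgt)
    loss.backward()
    optimizer.step()
```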

Tests showed that this method improved translation accuracy and overall performance across both high-resource and low-resource languages. Liu and Niehues noted that the models were also able to generalize to languages not included in the initial alignment training.

One significant advantage of this method is its modular nature: “task-specific and alignment modules trained separately can be combined post-hoc to improve transfer performance” without requiring full model retraining. This makes it possible to improve existing models with enhanced multilingual capabilities while avoiding the high computational costs of retraining from scratch.

Additionally, this approach is faster and more cost-effective since “a few hundreds of parallel sentences as alignment data are sufficient.”

The researchers have made the code available on GitHub, allowing others to implement and test their approach.

Wednesday, March 5, 2025

CEOs React as Trump Declares English the Sole Official Language of the US

In response to President Trump’s executive order designating English as the official language of the US, SlatorPod gathered Dipak Patel, CEO of GLOBO, and Peter Argondizzo, CEO of Argo Translation, to discuss its implications for the US language industry.

The discussion highlighted that language access has long been a key part of US policy, particularly in healthcare, education, and legal services. Dipak pointed out that eliminating language services would create inefficiencies, making it harder for medical professionals to provide accurate care.

Peter emphasized the broader uncertainty the order creates as many organizations rely on federal funding for language services, and a lack of clear guidance could lead to reduced support in schools, courts, and public services.

Both CEOs acknowledged that while this order presents challenges, the language services industry has historically adapted to change. Dipak suggested that financial pressures may push the industry to innovate, potentially accelerating AI adoption in interpreting.

While the long-term impact remains unclear, the consensus is that language access will persist — driven by business needs and market demand.

Monday, March 3, 2025

Trump Makes English the Only Official Language of the US, Revokes Clinton Language Access Order

A new Executive Order published on March 1, 2025, by the US White House designates English as the only official language of the United States and revokes Executive Order 13166, “Improving Access to Services for Persons with Limited English Proficiency” (LEP), signed in 2000 during the Clinton administration.

The new order states that it is in the country’s best interest for the federal government to designate only one official language. It also argues that “Establishing English as the official language will not only streamline communication but also reinforce shared national values, and create a more cohesive and efficient society.”

The order also specifies that agency heads, defined as “the highest-ranking official of an agency,” are “not required to amend, remove, or otherwise stop production of documents, products, or other services prepared or offered in languages other than English.”

While the text states that this Executive Order does not create new legal rights or benefits, and that it should be implemented subject to existing laws, the Trump administration has instructed the Attorney General to update policy guidance in line with the new official language designation.

A Shift in Language Access Policy?

The revoked Executive Order 13166 required federal agencies to examine their services and identify language assistance needs for LEP populations, mandating the development and implementation of systems to meet those needs.

Language access guidelines under 13166 aligned with Title VI of the Civil Rights Act of 1964, a federal law that prohibits discrimination on the basis of multiple criteria, including national origin (and, by association, language). The guidance expressly required recipients of federal financial assistance, such as healthcare providers and educational institutions, to offer language assistance to LEP applicants and beneficiaries.

In 2003, during George W. Bush’s administration, the LEP.gov website was launched to help federal agencies, advocates, and individuals access related information and services. And for a couple of decades, including during the previous Trump term (2017–2021), the US Department of Justice (DOJ) provided policy guidance to assist federal agencies with compliance.

Still standing, and not mentioned in the latest Executive Order, is the one formulated during the Biden administration: Executive Order 14091 (2023), “Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government.”

Executive Order 14091 instructs agencies to consider opportunities to “improve accessibility for people with disabilities and improve language access services to ensure that all communities can engage with agencies’ respective civil rights offices.”

At the time of writing, both the LEP.gov site and the US Commission on Civil Rights’ Limited English Proficiency Plan remain online.

Wednesday, February 12, 2025

Researchers Present DOLFIN, a New Test Set for AI Translation for Financial Content

On February 5, 2025, a team of researchers from Grenoble Alpes University and Lingua Custodia, a France-based company specializing in AI and natural language processing (NLP) for the finance sector, introduced DOLFIN, a new test set designed to evaluate document-level machine translation (MT) in the financial domain.

The researchers say that the financial domain presents unique challenges for MT due to its reliance on precise terminology and strict formatting rules. They describe it as “an interesting use-case for MT” since key terms often shift meaning depending on context.

For example, the French word couverture means blanket in a general setting but hedge in financial texts. Such nuances are difficult to capture without larger translation units.

Despite strong research interest in document-level MT, specialized test sets remain scarce, the researchers note. Most datasets focus on general topics rather than domains such as legal and financial translation.

Given that many financial documents “contain an explicit definition of terms used for the mentioned entities that must be respected throughout the document,” they argue that document-level evaluation is essential. 

DOLFIN allows researchers to assess how well MT models translate longer texts while maintaining context. 

Unlike traditional test sets that rely on sentence-level alignment, DOLFIN structures data into aligned sections, enabling the evaluation of broader linguistic challenges, such as information reorganization, terminology consistency, and formatting accuracy.

Context-Sensitive

To build the dataset, they sourced parallel documents from Fundinfo, a provider of investment fund data, and extracted and aligned financial sections rather than individual sentences. The dataset covers English-French, English-German, English-Spanish, English-Italian, and French-Spanish, with an average of 1,950 segments per language pair. 

The goal, according to the researchers, was to develop “a test set rich in context-sensitive phenomena to challenge MT models.”

To assess the usefulness of DOLFIN, the researchers evaluated large language models (LLMs) including GPT-4o, Llama-3-70b, and their smaller counterparts. They tested these models in two settings: translating sentence by sentence versus translating full document sections. 
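
In rough outline, the two settings differ only in how much text goes into each prompt; the `llm_translate` helper below is a hypothetical stand-in for an actual model call, and the sentences are made up.

```python
# Hypothetical helper: `llm_translate` stands in for whichever LLM client
# is used (the study evaluated models such as GPT-4o and Llama-3-70b).
def llm_translate(prompt: str) -> str:
    return "<translation>"  # placeholder: plug in a real model call here

section = [
    "The Fund may use derivatives for hedging purposes.",
    "Such hedges are described in the prospectus.",
]

# Setting 1: sentence by sentence, with no surrounding context.
sentence_level = [llm_translate(f"Translate into French:\n{s}") for s in section]

# Setting 2: the whole aligned section in one prompt, so the model can keep
# terminology (e.g., hedge -> couverture) consistent across sentences.
document_level = llm_translate(
    "Translate the following section into French:\n" + "\n".join(section)
)
```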

They found that DOLFIN effectively distinguishes between context-aware and context-agnostic models, while also exposing model weaknesses in financial translation.

Larger models benefited from more context, producing more accurate and consistent translations, while smaller models often struggled. “For some segments, the generation enters a downhill, and with every token, the model’s predictions get worse,” the researchers observed, describing how smaller LLMs failed to maintain coherence over longer passages.

DOLFIN also reveals persistent weaknesses in financial MT, particularly in formatting and terminology consistency. Many models failed to properly localize currency formats, defaulting to English-style notation instead of adapting to European conventions.
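
That failure mode is easy to see side by side; a small example with the Babel library shows the two conventions (the amount is made up).

```python
from babel.numbers import format_currency

# Same amount, two conventions; a model that keeps the English pattern in a
# French target text makes exactly the kind of error DOLFIN surfaces.
amount = 1234567.89
print(format_currency(amount, "EUR", locale="en_US"))  # €1,234,567.89
print(format_currency(amount, "EUR", locale="fr_FR"))  # 1 234 567,89 € (non-breaking spaces)
```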

The dataset is publicly available on Hugging Face.
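
For anyone who wants to experiment, a minimal loading sketch with the Hugging Face `datasets` library follows; the repository ID, configuration, and split are assumptions to verify against the authors’ release.

```python
from datasets import load_dataset

# The repository ID, configuration, and split below are assumptions; check
# the authors' Hugging Face release for the exact names.
dolfin = load_dataset("LinguaCustodia/dolfin", "en-fr")
print(dolfin["test"][0])  # one aligned section with source and target text
```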

Authors: Mariam Nakhlé, Marco Dinarelli, Raheel Qader, Emmanuelle Esperança-Rodier, and Hervé Blanchon

Monday, February 10, 2025

Off-Screen Drama Pits AI Dubbing Against French Voice Actors

How do US actor Sylvester Stallone, France’s minister for gender equality Aurore Bergé, and a multibillion-dollar voice AI company collide in a tense drama? Since early January 2025, multiple online media sources have highlighted a clash that began with news that “Armor,” a film starring Stallone, would feature AI dubbing.

For 50 years, Alain Dorval was the familiar voice of Stallone in French-dubbed films, but he passed away in February 2024. Minister Bergé happens to be Dorval’s daughter. Enter ElevenLabs, which in January 2025 reached a USD 3bn valuation and found itself at the center of a weeks-long controversy over the cloning of Dorval’s voice.

Bergé publicly opposed (article in French) the use of her father’s digitally recreated voice, despite acknowledging a prior agreement to a test. “It was just a trial run, with an agreement strictly guaranteeing that my mother and I would have final approval before any use or publication. And that nothing could be done without our consent.”

According to Variety, which has followed the story since the partnership around “Armor” between Lumiere Ventures and ElevenLabs came to light, Bergé’s move galvanized the French actors’ guild (FIA, in French). 

FIA’s representative, Jimmy Shuman, called the voice cloning attempt a “provocation” in the Variety article. That is because the union is in the midst of “negotiating agreements on limits for artificial intelligence and dubbing.”

The controversy over Stallone’s French voice underscores the potential for AI to displace voice actors, who are often celebrities in their own right across Europe.

ElevenLabs CEO, Mati Staniszewski, told Variety that “Recreating Alain Dorval’s voice is a chance to show how technology can honor tradition while creating new possibilities in film production.” 

Like their US counterparts, who have taken a few notable actions of their own, voice-over artists in several European countries are taking a proactive stance through their unions: adding AI clauses to their contracts that restrict AI voice use to specific projects, or refusing outright to work for studios that do not offer adequate protections.

Per the latest Variety article on the subject, voice actor Michel Vigné will be the voice of Stallone for the French release. According to IMDb, Vigné has voiced Stallone in French in the past.

The larger issue remains: the film industry acknowledges that AI voice cloning technology is rapidly advancing, and the drama around Armor’s French dub serves as a sign of things to come in Europe and beyond.

One decision that many voice actors will need to grapple with is whether they want their voices to be immortalized with AI or simply replaced, either by AI or by another actor.
