Slator.com is the leading source of analysis and research for the global translation, localization, and language technology industry. We host SlatorCon, the language industry’s foremost executive conference, and publish SlatorPod, the weekly language industry podcast.
Florian and Esther discuss the language industry news of the week, breaking down Slator’s 2025 Language Service Provider Index (LSPI), which features nearly 300 LSPs and reports 6.6% combined growth in 2024 revenues, totaling USD 8.4bn.
Florian touches on a surprise USD 10m donation from private equity executive Mario Giannini to launch a new MA translation and interpreting program at California State University, Long Beach. The duo talks about McKinsey’s State of AI report, which continues to classify translators as AI-related roles and shows that hiring them has become slightly easier.
In Esther’s M&A corner, TransPerfect announced two acquisitions, Technicolor Games and Blu Digital Group, further expanding its presence in gaming and media localization. In Israel, BlueLion and GATS merged to form TransNarrative, and Brazilian providers Korn Translations and Zaum Langs joined forces under the Idlewild Burg group.
Meanwhile, in funding, Teleperformance invested USD 13m in Sanas, a startup offering real-time accent translation for call centers to improve global communication. Lingo.dev raised USD 4.2m, while Dubformer secured USD 3.6m to develop the ‘Photoshop of AI dubbing’.
Florian shares insights from Slator’s 2025 Localization Buyer Survey, which found that over half of buyers want strategic AI support from vendors and many cite inefficient automation as a key challenge.
Amid growing uncertainty about translation careers due to AI advancements and sensationalized headlines, one California university is celebrating a transformative donation.
On March 11, 2025, California State University, Long Beach (CSULB) announced that Mario Giannini, Executive Co-Chairman of private equity firm Hamilton Lane, has gifted $10 million to establish a Master of Arts program in Translation and Interpreting. The program is set to launch in the fall of 2026.
This isn’t Giannini’s first significant contribution to CSULB. In January 2017, he donated $1.75 million to establish The Clorinda Donato Center for Global Romance Languages and Translation Studies, named after its current director.
CSULB Center Expands Translation Studies with Additional $5.25M Gift
Housed within CSULB’s College of Liberal Arts, The Clorinda Donato Center for Global Romance Languages and Translation Studies serves as a hub for both pedagogical research and instruction in Romance languages.
Students can pursue a minor or graduate certificate in translation studies, while a major in translation is available through collaboration with the Department of Linguistics. The Center has also hosted internship programs, with growing demand from students across disciplines—ranging from speech-language therapy and the arts to politics—seeking on-campus experience.
In 2022, Giannini contributed an additional $5.25 million to The Center, marking the largest gift in the history of CSULB’s College of Liberal Arts. A write-up at the time highlighted The Center’s uniqueness, stating, “The Center is unique in the State of California, offering world-class training in translation studies at state university prices.”
Born in France to Italian-speaking parents, Giannini graduated from Cal State Northridge in 1973 with a BA in English and has credited the CSU system as “a huge influence” on his career. He currently serves as Executive Co-Chairman of Hamilton Lane and sits on the firm’s investment committees.
According to Director Clorinda Donato, who also teaches Italian and French, most of the new funds will be used to create scholarships for students admitted to the master’s program.
CSULB to Revive Translation Program with Advanced AI Integration
CSULB’s original translation and interpreting program, founded in the 1980s by renowned legal interpreter Alexander Rainof, was retired when he stepped down. CSULB later proposed reviving the program and brought the idea to Mario Giannini, approached in part because of his connection to the university as a CSU alumnus.
The current undergraduate program, along with the new two-track MA program, aims to train students in diverse areas such as audiovisual, community, educational, legal, literary, and medical translation and interpreting. According to Director Clorinda Donato, the curriculum will take an applied approach by integrating advanced AI, large language models (LLMs), and data science.
“We will proudly offer generous funding to each year’s cohort of prospective students to help reduce the costs of their graduate education,” Donato said. Annual tuition is $18,972 for undergraduates and $17,922 for graduate students, excluding room and board.
A Pinch, a Twitch, and Everything in Between: Pinch’s Christian Safka and Twitch’s Susan Maria Howard were among the top language industry leaders who joined hundreds of attendees on March 18, 2025, for the first SlatorCon Remote conference of the year.
Kicking off the day’s events, Slator’s Head of Advisory, Esther Bond, welcomed attendees and invited Managing Director Florian Faes to share the latest findings and insights in his highly anticipated 'industry health check'.
In his presentation, Faes began by reflecting on the challenges of 2024. He discussed data from Slator’s 2025 Language Service Provider Index (LSPI), highlighting the growth of interpreting-focused companies, the struggles faced by small, undifferentiated agencies, and the rapid rise of language AI, driven by companies like ElevenLabs and DeepL.
Faes also highlighted key findings from Slator’s 2025 Localization Buyer Survey, including the challenges buyers face in implementing AI and the growing need for AI partners to address inefficiencies. He also noted the mixed outlook for the industry in the year ahead.
LLMs Are Just the Beginning
The first expert presentation was delivered by Sara Papi, a Postdoctoral Researcher at the Fondazione Bruno Kessler, who discussed the current state of research in simultaneous speech-to-text translation.
Papi highlighted discrepancies between the original definition and current practices in the speech translation field, identified through a review of expert literature. She specifically pointed out issues related to the use of pre-segmented speech and inconsistencies in terminology.
Slator’s Head of Research, Anna Wyndham, moderated the first panel of the day, featuring Simone Bohnenberger-Rich, Chief Product Officer at Phrase; Simon Koranter, Head of Global Production & Engineering at Compass Languages; and Matteo Nonne, Localization Program Manager at On.
The panelists discussed the evolving role of generative AI in localization, highlighting its shift from initial experimentation to scalable solutions that drive growth. They shared insights on how AI is transforming localization from a cost center into a strategic function by enabling customized, context-aware content adaptation and addressing challenges related to return on investment (ROI) and stakeholder expectations.
Slator’s Alex Edwards, Senior Research Analyst, moderated another panel discussion focused on the adoption of large language models (LLMs) for AI translation in enterprise workflows. Panelists Manuel Herranz, CEO of Pangeanic, and Bruno Bitter, CEO of Blackbird.io, explored whether LLMs truly represent the state of the art.
Herranz and Bitter emphasized that middleware and techniques like Retrieval-Augmented Generation (RAG) are more advanced, and highlighted the importance of fine-tuning smaller, domain-specific models. They also discussed the role of orchestration technology in effectively managing a range of AI tools.
In his presentation, Supertext’s CEO Samuel Läubli echoed insights shared by other speakers, emphasizing that LLMs generate fluent texts by considering broader context. He explored the implications of an AI-first era for translation, the rise of smaller competitive players, and the continued importance of human expertise.
Läubli highlighted that the new Supertext resulted from a 2024 merger between LSP Supertext and AI translation company Textshuttle. He remarked, “I’ve been working in this field for 10 years now, but I haven’t seen a system or AI agent that can guarantee a correct translation — and I’m quite sure I won’t see it in the next 10 years.”
Teresa Toronjo, Localization Manager at Malt, discussed collaboration within leaner localization teams, stressing the importance of diverse partnerships, scalable processes, and maintaining quality consistency with cost-effectiveness guided by experts.
If you missed SlatorCon Remote March 2025 in real time, recordings will be available soon through our Pro and Enterprise plans.
An article on the Android Authority site from March 2025 states that the instant translation function could soon be available as a permanent feature within “Circle to Search.”
Android’s Instant Translation Gets Enhanced in ‘Circle to Search’
If enabled, users will be able to translate whatever they select on the screen using Circle to Search into a device’s supported languages (these vary by device brand and model) without navigating out of the search function.
Back in March 2024, Google announced the rollout of Language Translation Technology in the “Circle to Search” function for Android, which allows users to circle or tap text or an object on their mobile device screen and automatically search for it without switching apps.
So far, users have been able to translate things like menus on photos or PDFs by long-pressing the home button or navigation bar, and then tapping the translate icon.
Google’s Android XR, the operating system introduced in December 2024 and enhanced by Gemini 2.0, is slated to support instant translation as a regular feature beyond mobile devices. The function can be expected on watches, glasses, TVs, cars, and more, along with other AI-powered features like directions and message summaries.
At the time of writing, Google states that the translation feature is available in over 100 languages on its own Pixel smartphones. “If you want to chat with your cousin in Spain or translate a menu at a restaurant online, Circle to Search can help, no matter where the text appears – in a PDF, video, web page, or chat,” says the marketing copy in the Google Store.
In a February 20, 2025 paper, researchers Danni Liu and Jan Niehues from the Karlsruhe Institute of Technology proposed a way to improve how large language models (LLMs) perform across different languages.
New Research Explores How to Boost Large Language Models’ Multilingual Performance
They explained that LLMs like Llama 3 and Qwen 2.5 show strong performance in tasks like machine translation (MT) but often struggle with low-resource languages due to limited available data. Current fine-tuning processes do not effectively bridge the performance gaps across diverse languages, making it difficult for models to generalize beyond high-resource settings.
The researchers focus on leveraging the middle layers of LLMs to enable better cross-lingual transfer across multiple tasks, including MT.
LLMs consist of multiple layers. The early (or bottom) layers handle basic patterns like individual words, while the final (or top) layers focus on producing a response. The middle layers play a key role in capturing the deeper meaning of sentences and how different words relate to each other.
Liu and Niehues found that these middle layers “exhibit the strongest potential for cross-lingual alignment,” meaning they help ensure that words and phrases with similar meanings are represented in a comparable way across languages. Strengthening this alignment helps the model transfer knowledge between languages more effectively.
By extracting embeddings (i.e., representations of text in vector form) from the model’s middle layers and adjusting them so that equivalent concepts are closer together across languages, the researchers aim to improve the model’s ability to understand and generate text in multiple languages.
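To make this concrete, the sketch below (not the authors’ code) pulls a mean-pooled middle-layer embedding from a small open model and checks how close two parallel sentences sit in that space via cosine similarity. The model choice, pooling method, and example sentences are illustrative assumptions.

```python
from typing import Optional

import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: a small open model stands in for the larger LLMs studied in the paper.
model_name = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def middle_layer_embedding(text: str, layer: Optional[int] = None) -> torch.Tensor:
    """Mean-pool one middle layer's hidden states into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    hidden_states = outputs.hidden_states          # tuple of (num_layers + 1) tensors
    layer = layer if layer is not None else len(hidden_states) // 2  # default: middle
    return hidden_states[layer].mean(dim=1).squeeze(0)

# Parallel sentences with the same meaning should land close together.
en = middle_layer_embedding("The weather is nice today.")
de = middle_layer_embedding("Das Wetter ist heute schön.")
print(f"Middle-layer cosine similarity: {torch.cosine_similarity(en, de, dim=0).item():.3f}")
```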
Alternating Training Strategy
Rather than relying solely on task-specific fine-tuning, they introduce an “alternating training strategy” that switches between task-specific fine-tuning (e.g., for translation) and alignment training. Specifically, an additional step — middle-layer alignment — is integrated into the fine-tuning process to ensure that the representations learned in one language are more transferable to others.
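A minimal sketch of such an alternating loop is shown below, assuming a small open model, toy data, a plain language-modeling loss for the translation step, and a cosine-distance loss on mean-pooled middle-layer states for the alignment step. The actual losses, data, and hyperparameters in the paper may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: a small open model and toy examples stand in for the paper's setup.
model_name = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def task_loss(example: str) -> torch.Tensor:
    """Standard language-modeling loss on a translation-style example."""
    batch = tokenizer(example, return_tensors="pt")
    return model(**batch, labels=batch["input_ids"]).loss

def middle_embedding(text: str) -> torch.Tensor:
    """Mean-pooled hidden state of the middle layer (gradients flow through it)."""
    batch = tokenizer(text, return_tensors="pt")
    hidden = model(**batch).hidden_states
    return hidden[len(hidden) // 2].mean(dim=1).squeeze(0)

def alignment_loss(src: str, tgt: str) -> torch.Tensor:
    """Pull middle-layer representations of parallel sentences toward each other."""
    return 1 - torch.cosine_similarity(middle_embedding(src), middle_embedding(tgt), dim=0)

# Toy data: one translation example and one parallel sentence pair.
task_examples = ["Translate to German: The weather is nice. -> Das Wetter ist schön."]
parallel_pairs = [("The weather is nice.", "Das Wetter ist schön.")]

for example, (src, tgt) in zip(task_examples, parallel_pairs):
    # Step 1: task-specific fine-tuning step.
    optimizer.zero_grad()
    task_loss(example).backward()
    optimizer.step()
    # Step 2: middle-layer alignment step.
    optimizer.zero_grad()
    alignment_loss(src, tgt).backward()
    optimizer.step()
```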
Tests showed that this method improved both translation accuracy and overall task performance across high-resource and low-resource languages. Liu and Niehues noted that the models were also able to generalize to languages not included in the initial alignment training.
One significant advantage of this method is its modular nature: “task-specific and alignment modules trained separately can be combined post-hoc to improve transfer performance” without requiring full model retraining. This makes it possible to improve existing models with enhanced multilingual capabilities while avoiding the high computational costs of retraining from scratch.
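As an illustration of what post-hoc combination can look like in practice, here is a minimal sketch using the PEFT library, under the assumption that the task-specific and alignment modules are trained as separate LoRA adapters; the adapter paths and names are placeholders, and the authors’ actual implementation may differ.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Assumption: the same small open base model used in the sketches above.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

# Load two independently trained adapters onto the same base model
# (placeholder paths; these are not the authors' artifacts).
model = PeftModel.from_pretrained(base, "path/to/task_adapter", adapter_name="task")
model.load_adapter("path/to/alignment_adapter", adapter_name="align")

# Combine them post-hoc, without retraining the base model.
# "linear" assumes both adapters share the same rank.
model.add_weighted_adapter(adapters=["task", "align"], weights=[1.0, 1.0],
                           adapter_name="task_plus_align", combination_type="linear")
model.set_adapter("task_plus_align")
```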
Additionally, this approach is faster and more cost-effective since “a few hundreds of parallel sentences as alignment data are sufficient.”
The researchers have made the code available on GitHub, allowing others to implement and test their approach.
The discussion highlighted that language access has long been a key part of US policy, particularly in healthcare, education, and legal services. Dipak pointed out that eliminating language services would create inefficiencies, making it harder for medical professionals to provide accurate care.
CEOs React as Trump Declares English the Sole Official Language of the US
Peter emphasized the broader uncertainty the order creates as many organizations rely on federal funding for language services, and a lack of clear guidance could lead to reduced support in schools, courts, and public services.
Both CEOs acknowledged that while this order presents challenges, the language services industry has historically adapted to change. Dipak suggested that financial pressures may push the industry to innovate, potentially accelerating AI adoption in interpreting.
While the long-term impact remains unclear, the consensus is that language access will persist — driven by business needs and market demand.
A new Executive Order published on March 1, 2025, by the US White House designates English as the only official language of the United States and revokes Executive Order 13166, “Improving Access to Services for Persons with Limited English Proficiency” (LEP), signed in 2000 during the Clinton administration.
Trump Makes English the Only Official Language of the US, Revokes Clinton Language Access Order
The new order states that it is in the country’s best interest for the federal government to designate only one official language. It also argues that “Establishing English as the official language will not only streamline communication but also reinforce shared national values, and create a more cohesive and efficient society.”
The order also specifies that agency heads, defined as “the highest-ranking official of an agency,” are “not required to amend, remove, or otherwise stop production of documents, products, or other services prepared or offered in languages other than English.”
While the text states that this Executive Order does not create new legal rights or benefits, and that it should be implemented subject to existing laws, the Trump administration has instructed the Attorney General to update policy guidance in line with the new official language designation.
A Shift in Language Access Policy?
Revoked order 13166 required federal agencies to examine their services and identify language assistance needs for LEP populations, mandating the development and implementation of systems to meet those needs.
Language access guidelines under 13166 aligned with Title VI of the Civil Rights Act of 1964, a federal law that prohibits discrimination on the basis of multiple criteria, including national origin (and by association, language). It expressly mandated recipients of federal financial assistance, such as healthcare providers and educational institutions, to offer language assistance to LEP applicants and beneficiaries.
In 2003, during George W. Bush’s administration, the LEP.gov website was launched to help federal agencies, advocates, and individuals access related information and services. And for a couple of decades, including during the previous Trump term (2017–2021), the US Department of Justice (DOJ) provided policy guidance to assist federal agencies with compliance.
Still standing, and not mentioned in the latest Executive Order, is the one formulated during the Biden administration: Executive Order 14091 (2023), “Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government.”
Executive Order 14091 instructs agencies to consider opportunities to “improve accessibility for people with disabilities and improve language access services to ensure that all communities can engage with agencies’ respective civil rights offices.”
At the time of writing, both the LEP.gov site and the US Commission on Civil Rights’ Limited English Proficiency Plan remain online.
How do US actor Sylvester Stallone, France’s minister for gender equality Aurore Bergé, and a multilingual, multibillion-dollar voice AI company collide in a tense drama? Since early January 2025, multiple online media sources have highlighted a clash that began with news that “Armor,” a film starring Stallone, would feature AI dubbing.
For 50 years, Alain Dorval was the familiar voice of Stallone in French-dubbed films, but he passed away in February 2024. Minister Bergé happens to be Dorval’s daughter. Enter ElevenLabs, which in January 2025 reached a USD 3bn valuation and found itself at the center of a weeks-long controversy over the cloning of Dorval’s voice.
Bergé publicly opposed (article in French) the use of her father’s digitally recreated voice, despite acknowledging a prior agreement to a test. “It was just a trial run, with an agreement strictly guaranteeing that my mother and I would have final approval before any use or publication. And that nothing could be done without our consent.”
According to Variety, which has followed the story since the partnership around “Armor” between Lumiere Ventures and ElevenLabs came to light, Bergé’s move galvanized the French actors’ guild (FIA, in French).
FIA’s representative, Jimmy Shuman, called the voice cloning attempt a “provocation” in the Variety article. That is because the union is in the midst of “negotiating agreements on limits for artificial intelligence and dubbing.”
The controversy over Stallone’s French voice underscores the potential for AI to displace voice actors, who are often celebrities in their own right across Europe.
ElevenLabs CEO, Mati Staniszewski, told Variety that “Recreating Alain Dorval’s voice is a chance to show how technology can honor tradition while creating new possibilities in film production.”
Like their US counterparts, who have already taken a few notable actions of their own, voice-over artists in several European countries are taking a proactive stance through their unions, including AI clauses in their contracts that restrict AI voice use to specific projects, or outright refusing work for studios that do not offer adequate protections.
Per the latest Variety article on the subject, voice actor Michel Vigné will be the voice of Stallone for the French release. According to IMDb, Vigné has voiced Stallone in French before.
The larger issue remains: the film industry acknowledges that AI voice cloning technology is rapidly advancing, and the drama around Armor’s French dub serves as a sign of things to come in Europe and beyond.
One decision that many voice actors will likely need to grapple with is whether they want their voices to be immortalized through AI or simply replaced, whether by AI or by another actor.
Bryan Forrester, Co-founder and CEO of Boostlingo, returns to SlatorPod for round 2 to talk about the company’s growth, the US interpreting market, and the evolving role of AI.
Bryan explains how Boostlingo balances innovation with practicality, ensuring that new features align with customer needs. He highlights the company’s three-pronged strategy: retaining existing customers, enabling growth, and making long-term bets on emerging trends.
While tools like real-time captions and transcription enhance efficiency, Bryan stresses that AI alone cannot replace human interpreters in complex industries like healthcare. He highlights privacy, compliance, and the nuanced expertise of human interpreters as critical factors, positioning AI as a supportive tool rather than a replacement.
https://youtu.be/fMNcJ5EV2zk
Bryan discusses market dynamics and regulatory changes, including how those under the new US administration could influence language access demand, particularly in areas like healthcare and public services.
He describes Boostlingo’s strategy of leveraging third-party AI models, optimizing them with proprietary data, and rigorously testing to ensure quality and reliability. Looking ahead, Boostlingo plans to expand internationally and integrate AI ethically and effectively into its offerings, guided by its newly formed AI Advisory Board.
Slator's Pro Guide: AI in Interpreting is an absolute must-read for all providers of interpreting services and solutions. Here, the authors give a quick snapshot of what the newest applications of AI and large language models (LLMs) look like in interpreting.
This Slator Pro Guide will bring you up to speed on what value AI can bring to your company and on the new interpreting workflows, service models, and speech AI capabilities now available to you.
The guide covers 10 one-page, actionable case studies, thematically designed and presented as vibrant infographics drawn from research and interviews with some of the leading interpreting service providers in the industry.
The ten use cases highlight new areas of growth, innovative models for service delivery, and novel workflows in interpreting made possible by recent developments in LLMs, speech-to-text, and speech synthesis.
We illustrate how AI speech translation solutions are being leveraged to open up language access in corporate, government, and healthcare contexts, cutting across a wide variety of settings and service delivery models.
The guide also discusses AI as an interpreter tool and co-pilot, as well as its capability to optimize operations and extract insights from interpreted interactions.
Each use case describes the underlying concept and its practical implications. An adoption and value-add score is also provided to reflect the industry's current level of uptake for the application as well as the additional value it delivers to end clients.
We explain how the technology works and offer a brief list of leading AI solution providers currently on the market.
We expand on the new opportunities and benefits that each use case presents for interpreting stakeholders and carry out an impact analysis for the interpreting sector.
We also identify key risks and limitations to watch out for, which need to be considered in the adoption process.
The guide provides a higher-level overview of the key and most impactful applications, serving as a launching pad for stakeholders making strategic decisions about adopting AI in interpreting technology and service models.
This Pro Guide is a must-read and time-saving briefing on how AI is revolutionizing the interpreting landscape.
On 7 January 2025, LinkedIn News UK released its "Job trends 2025: The 25 fastest-growing jobs in the UK", and interpreters find themselves at #22. LinkedIn calls these "Jobs on the Rise", positions it considers to be pointers to areas of career opportunity based on data collected over the past three years. In the list, it names both spoken and sign language interpreters and states the skills typical for a professional in the field as interpreting, translation, and consecutive interpretation. The professionals are thus mainly in demand in translation and localization, museums, historical sites, zoos, and, interestingly enough, transportation equipment manufacturing.
The LinkedIn data points to London, Manchester, and Glasgow as the top UK locations where the hiring of interpreters is taking place. The average experience required is 2.2 years, while most interpreters work remotely (73%) or in a hybrid arrangement (8%). The rest are assumed to work on-site, though that figure is not included in the list.
Most interpreters in the UK and other countries work as public service interpreters, with a minority working as conference interpreters. Public service interpreters work at public institutions, such as the National Health Service (NHS), the Courts and Tribunals System, and Border Force and Immigration Enforcement.
Interpreting is one of the UK government's regulated professions. "Interpreter" is included in the government list under "Chartered Linguist," which is a general term for various professions related to languages recognized by the bodies that subscribe to the standards of the Chartered Institute of Linguists (CIOL), such as the National Register of Public Service Interpreters (NRPSI).
In contrast to the upbeat LinkedIn ranking of interpreting as an area of opportunity is the at times contentious environment in the UK's public service interpreting sector, especially over the past two years. In that timeframe, for instance, the NRPSI has sent multiple official communications to the Ministry of Justice regarding its policies, as interpreters continue to protest working conditions and pay schedules in several cities.
Slator has also covered developments in UK public service interpreting over the same period in several articles, on topics such as the review of court interpreter qualifications, new credentials, legal probes into current contracts and future tenders, who pays for interpreter services, and discussions on the use of AI, or lack thereof, during government sessions.