Monday, June 9, 2025

What Are Language Solutions Integrators and Language Technology Platforms?

 

SlatorPod #252 - What Are LSIs and LTPs?

Florian and Esther welcome Slator’s Anna Wyndham and Alex Edwards to SlatorPod to explain the rationale behind the new industry framework introduced in the Slator 2025 Language Industry Market Report.

Drawing from the flagship report and echoing the buzz of SlatorCon London, the team explains why the traditional labels, Language Service Providers (LSPs) and Translation Management Systems (TMSs), no longer capture the scope and complexity of the evolving market. Instead, Slator has introduced two new terms: Language Solutions Integrators (LSIs) and Language Technology Platforms (LTPs).

Anna defines LTPs as pure-play technology providers that develop language tools, applications, orchestration platforms, and AI models. LSIs, she explains, are organizations whose core offering is to deliver fit-for-purpose multilingual content solutions by integrating language technology and AI with human experts as part of a fully managed solution.

Esther confirms early advisory adoption of the terms, noting investor interest in clearer tech-service distinctions. Alex adds that automatic dubbing startups tend to fit LTPs better than LSPs, as they often operate self-serve AI platforms.


Anna clarifies that big tech players like OpenAI and Google are excluded from the market sizing as they are foundational enablers, not language-focused businesses. The team also discusses why the term “AI” was excluded from the new categories as it may become as ubiquitous as “Cloud”.

To close, Anna points out that LSIs currently capture the bigger portion of the total addressable market (TAM). The team sees a strong demand for expert-in-the-loop services and growing LTP–LSI partnerships.

Thursday, June 5, 2025

Grammarly Raises a Billion in Financing Tied to Revenue Growth



Known as much for its flagship writing-assistant offering as for its ubiquitous advertising, Grammarly raised USD 1bn from General Catalyst’s Customer Value Fund. The funds will operate like a loan or credit line with capped returns tied to revenue, rather than as an equity stake.

The financing, announced on X on May 29, 2025, by the company’s current CEO, Shishir Mehrotra, is structured in a way that allows General Catalyst to increase its investment in Grammarly without the company diluting ownership by issuing new shares to other investors, according to Reuters.

The Reuters article also said the capital is intended to support and accelerate Grammarly’s growth through increased spending on sales, marketing, and strategic acquisitions. The investment is also expected to allow 16-year-old Grammarly to reallocate funds towards product development, particularly expanding its AI-driven offerings with communication-centric tools and integrated external applications, Reuters reported.

Mehrotra, who founded the AI collaborative platform Coda, said in an interview with Reuters that “Grammarly is going through a huge transformation… from being what is mostly known as a single-purpose agent to being an agent platform.” Coda completed a merger with Grammarly in January 2025.

“Grammarly is going through a huge transformation… from being what is mostly known as a single-purpose agent to being an agent platform.” — Shishir Mehrotra, CEO of Grammarly

This is not General Catalyst’s first show of faith in Grammarly. The firm led a USD 90m investment round in 2019 and likely propelled the Coda acquisition by being an investor in both companies.

Reflecting on the deal, podcasters from “This Week in Startups” (TWIST) remarked that “SaaS is a difficult business and maybe [General Catalyst] thought ‘hey we could put these two companies together [and] have a stronger company.’ This will help increase the value of the company… so if they have some LPs [Limited Partners] who want to get a return on capital you get the double benefit — you’re loaning them money instead of having them take the money from a bank, so it’s all in the family.”

As a SaaS company, Grammarly faces potential headwinds, according to the TWIST podcast. The impact of AI tools, for example, could lead to a need for SaaS companies to re-evaluate pricing models, potentially moving away from per-seat to a consumption model, which historically has not been popular.

As of Q2 2025, Grammarly is a profitable at-scale business with an annualized revenue of more than USD 700m, supported by an individual user base estimated at 40 million and over 50,000 Grammarly Business accounts.

Mehrotra does not discount a future IPO: “I’m right now just focused on making sure we’re innovating with new products, growing as fast as we can. But when we feel ready, we’ll go public,” he told Reuters.

The investment underscores strong confidence in the company’s proven AI expertise and deep understanding of language, particularly given General Catalyst’s track record of backing companies with reliable revenue models (including the Viva Translate platform). 

According to PitchBook, Grammarly was valued at USD 13bn as far back as 2021.

Friday, May 30, 2025

EU Postpones Spain Language Decision Again

 


Almost two years after the first formal petition, Spain once again pushed for its three co-official languages, Catalan, Basque, and Galician, to be added as official languages within the European Union (EU). On May 27, 2025, the EU postponed any decision on the matter for a second time.

The first time the proposal was rejected was in September 2023, a little over a month after the Spanish Minister of Foreign Affairs, José Manuel Albares, requested that the Council of the European Union include the languages. 

At the time, Spain held the rotating presidency of the Council of the EU, which communicated that it had discussed the matter but needed more information, and deferred a decision on whether to bring the proposal to a vote.

This time, with Poland in the presidency, the EU has once again postponed any decision on the matter after at least ten countries, including Finland, Italy, and Germany, threatened to reject the proposal if it was brought to a vote. Sweden and other countries had opposed the 2023 proposal. Furthermore, a change in language policy at the EU requires a unanimous vote by all 27 EU Member States.

Initially, Albares proposed rolling out Catalan first and told Member States that Spain was willing to cover the costs of bringing all three languages into the EU (i.e., projected document translation costs).

A key piece of Spain’s Prime Minister Pedro Sánchez’s political campaign to remain in power was his commitment to the Junts Catalan party in 2023 to continue pressing the EU on co-official language inclusion in exchange for support at the polls.

According to the Spanish newspaper “El Diario,” Spain’s government spokesperson Pilar Alegría stated that Spain accepted the request to continue the dialogue and acknowledged that the proposal lacks enough support to go forward.


Wednesday, May 28, 2025

What is a Language Solutions Integrator?

 


The term “Language Solutions Integrator” (LSI) was first introduced in the Slator 2025 Language Industry Market Report. It describes organizations whose core offering is to deliver fit-for-purpose multilingual content solutions by integrating language technology and AI with human experts as part of a fully managed solution.

While LSIs use technology — including AI — as part of the multilingual content production process to gain efficiency and reduce overall costs, they are ultimately responsible for the final outcome of multilingual content, as their value proposition includes the involvement of expert linguists and quality specialists.

These experts-in-the-loop (EITL) are typically deployed and managed directly by the LSI to ensure that outcomes meet the specific requirements of each buyer.

Examples of LSIs include TransPerfect, LanguageWire, RWS, Lilt, and Boostlingo — to name just a few among the thousands of LSIs operating globally.

A potential misconception is that LSIs do not own or operate any proprietary technology. This is not the case: many LSIs own market-leading technology, and some are not even tech-agnostic. What defines an LSI is that it monetizes human-managed outcomes, even when those outcomes are AI-enabled.

Smartling, for example, is an AI-first company with an all-in-one language AI platform available to buyers of localization services and other LSIs. However, Smartling’s EITL services enable the company to deliver fully managed, fit-for-purpose multilingual content solutions, meaning that it is also an LSI.

In short, LSIs can work with any number of service providers and technology solutions with a core value proposition to deliver human-verified multilingual language content.

In addition, LSIs may choose to integrate or partner with Language Technology Platforms (LTPs), which are covered here.


From a revenue perspective, LSIs typically operate on a project- or SLA-basis and price their solutions based on volume. LSIs that own proprietary technology may complement this with revenue generated based on a SaaS subscription model, whose primary metric is Annual Recurring Revenue (ARR).

Thursday, May 22, 2025

IIT Bombay Explores Accent-Aware Speech Translation

 


In a May 4, 2025 paper, researchers at IIT Bombay introduced a new approach to speech-to-speech translation (S2ST) that not only translates speech into another language but also adapts the speaker’s accent.

This work aligns with growing industry interest in accent adaptation technologies. For example, Sanas, a California-based startup, has built a real-time AI accent modification tool that lets users change their accent without changing their voice. Similarly, Krisp offers AI Accent Conversion technology that neutralizes accents in real time, improving clarity in customer support and business settings.

While Sanas and Krisp focus on accent adaptation alone, the IIT Bombay researchers explore how accent and language translation can be combined in a single model.

“To establish effective communication, one must not only translate the language, but also adapt the accent,” the researchers noted. “Thus, our problem is to model an optimal model which can both translate and change the accent from a source speech to a target speech,” they added.

Scalable and Expressive Cross-Lingual Communication

To do this, they proposed a method based on diffusion models, a type of generative AI typically associated with image generation — DALL-E 2, which creates realistic images based on the user’s text input, is an example of diffusion models — but their applications extend to other domains, including audio generation.

They implemented a three-step pipeline. First, an automatic speech recognition (ASR) system converts the input speech into text. Then, an AI translation model translates the text into the target language. Finally, a diffusion-based text-to-speech model generates speech in the target language with the target accent.
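The three-step pipeline above can be sketched in a few lines of Python. This is an illustrative stub, not the authors’ code: the function names are hypothetical, and each stage returns canned data where a real system would call an ASR model, an MT model, and a diffusion-based TTS model.

```python
# Hypothetical sketch of the three-stage S2ST pipeline (ASR -> MT -> accented TTS).
# All names and stub outputs are placeholders for illustration only.

def recognize_speech(audio: bytes) -> str:
    """Stage 1 (ASR): convert source-language audio to text.
    Stubbed with a canned transcript."""
    return "hello, how are you?"

def translate_text(text: str, target_lang: str) -> str:
    """Stage 2 (MT): translate the transcript into the target language.
    Stubbed with a tiny lookup table standing in for an MT model."""
    toy_mt = {("hello, how are you?", "hi"): "namaste, aap kaise hain?"}
    return toy_mt.get((text, target_lang), text)

def synthesize_speech(text: str, accent: str) -> dict:
    """Stage 3 (diffusion TTS): generate target-language speech in the
    target accent. A real system would emit a mel-spectrogram and vocode
    it to audio; here we just return a descriptor."""
    return {"text": text, "accent": accent, "audio": b"<waveform>"}

def speech_to_speech(audio: bytes, target_lang: str, accent: str) -> dict:
    transcript = recognize_speech(audio)                   # step 1: ASR
    translation = translate_text(transcript, target_lang)  # step 2: MT
    return synthesize_speech(translation, accent)          # step 3: accented TTS

result = speech_to_speech(b"<source-audio>", "hi", "hindi")
```

The design point the paper makes is that only the third stage is novel: the first two stages are conventional, and the accent adaptation is folded into the synthesis step rather than handled by a separate model.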

The core innovation lies in the third step, where the researchers used a diffusion model for speech synthesis. In this case, instead of creating images, the model generates mel-spectrograms (i.e., visual representations of sound) based on the translated text and target accent features, which are then turned into audio. For this, the researchers used GradTTS, a diffusion-based text-to-speech model, as the foundation of their system.

They tested their model on English and Hindi, evaluating its ability to generate speech that reflects both the correct translation and target accent. “Experimental results […] validate the effectiveness of our approach, highlighting its potential for scalable and expressive cross-lingual communication,” they said.

The researchers acknowledged several limitations, but they still see this as a promising starting point. “This work sets the stage for further exploration into unified, diffusion-based speech generation frameworks for real-world multilingual applications,” they concluded.

Authors: Abhishek Mishra, Ritesh Sur Chowdhury, Vartul Bahuguna, Isha Pandey, and Ganesh Ramakrishnan

Wednesday, May 21, 2025

AI Tech Consulting Firm Quansight Acquires Cobalt Speech and Language



On May 6, 2025, open source technology consulting firm Quansight announced that it had acquired Cobalt Speech and Language, a provider of automatic speech recognition (ASR), transcription, natural language understanding, and other voice technologies in multiple languages. The deal closed April 10, 2025. 

According to Quansight CEO Travis Oliphant, the purchase was for cash, earn-out, and equity in Quansight portfolio companies. 

“Quansight builds AI systems and has key developers who know how to build the tools behind AI (PyTorch, JAX, TensorFlow, and NumPy),” Oliphant told Slator. “Cobalt builds language systems that use these tools.”

Oliphant said that Quansight decided to acquire Cobalt, rather than build its own speech technologies in-house, based on the strength of Cobalt’s team, which could help maintain a certain speed of development. Of course, he acknowledged that the prospect of acquiring Cobalt’s customers was also attractive.

Massachusetts-based Cobalt was founded in 2014 by CEO Jeff Adams, known as the “father of Alexa” for his work on Amazon Echo. A press release on the acquisition quoted Adams as saying that Cobalt has “always focused on delivering highly customized speech and language tools that work in the real world, not just in the lab.”

Oliphant told Slator that Cobalt’s approximately 15 employees will join Quansight’s team of 80.

Cobalt currently offers several voice-enabled technologies, including Cobalt Transcribe for speech recognition and transcription. Its end-to-end speech recognition engines are powered by deep neural networks (DNNs), and clients can choose from two different DNN models based on their needs.

Hybrid models use separately tunable acoustic models, lexicons, and language models for maximum flexibility and customization for various use cases.

End-to-end models, meanwhile, directly convert sounds to words within the same DNN. This version works for general use and tends to produce more accurate transcriptions (based on word error rates) than the hybrid models.
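Word error rate, the metric mentioned above, is conventionally computed as the word-level edit distance (substitutions, insertions, deletions) between a reference transcript and the system’s hypothesis, divided by the number of reference words. A minimal sketch of that standard calculation, not Cobalt’s implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level Levenshtein distance divided by the
    number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word in a four-word reference -> WER of 0.25
wer = word_error_rate("the cat sat down", "the cat sat town")
```

A lower WER means more accurate transcription, which is the basis for the hybrid vs. end-to-end comparison described above.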

Speech recognition is available in English, Spanish, French, German, Russian, Brazilian Portuguese, Korean, Japanese, Swahili, and Cambodian, though Cobalt is “always looking for partners to develop, sell, and/or market speech technology in other languages,” according to Cobalt Transcribe FAQs.

Other services include Cobalt Speech Intelligence, which analyzes audio to glean demographic information about speakers, such as age, gender, and regional accent, plus emotion.

Investments and Intersections

As a consulting firm, Quansight specializes in solving data-related problems with open-source software and services, including AI, data and machine learning engineering, RAG, and large language models (LLMs), among others. 

Quansight, founded in 2018, has previously invested in pre-seed rounds for two other companies: Savimbo, a certifier of fair-trade carbon, biodiversity, and water credits; and Mod Tech Labs, an AI platform for 3D content creation. 

Quansight Initiate, an early-stage VC firm also headed by Oliphant, has invested in five open source tech startups since its 2019 founding.

“Quansight recently completed a restructuring of subsidiary companies,” Oliphant explained to Slator. “Going forward, M&A activities will focus on OpenTeams (for AI growth), OS BIG, Inc. dba OpenTeams Incubator (investment and M&A), Cobalt Speech and Language (speech and language technology and services), and Quansight, PBC to continue with the community-driven open-source aspect of its business.”

“All of our companies now have either existing or prospective intersections with the language industry,” he added.

Monday, April 7, 2025

Language Discordance Raises Risk of Hospital Readmissions, U.S. Study Finds

 A June 2024 meta-analysis published in BMJ Quality & Safety was recently brought back into the spotlight by Dr. Lucy Shi, who discussed its findings in an article for The Hospitalist. The study, conducted by Chu et al., examined the link between language discordance and unplanned hospital or emergency department (ED) readmissions.


The researchers also evaluated whether interpretation services could help reduce disparities in these outcomes between patients who speak a non-dominant language and those who do not. Their analysis was based on a literature search of PubMed, Embase, and Google Scholar, initially conducted on January 21, 2021, and updated on October 27, 2022.

Extensive research has shown that patients and families with non-dominant language preferences often face challenges in communication, understanding medical information, and accessing care. Language discordance can contribute to adverse events and poorer outcomes during critical care transitions, such as hospital discharge.

The authors of the paper note that previous research on the effects of language discordance on hospital readmissions and emergency department (ED) revisits has produced mixed results — differences they partially attribute to variations in study criteria and methodologies.

The studies included in the meta-analysis were primarily conducted in Switzerland and English-speaking countries such as the US, Australia, and Canada. These studies reported data on patient or parental language skills or preferences and measured outcomes such as unplanned hospital readmissions or ED revisits.

To maintain consistency, the authors excluded non-English studies, those lacking primary data, and studies that did not stratify patient outcomes by language preference or use of interpretation services. Ultimately, the analysis included data from 18 adult studies focused on 28- or 30-day hospital readmissions, seven adult studies on 30-day ED revisits, and five pediatric studies examining 72-hour or seven-day ED revisits.

Findings
The meta-analysis revealed that adult patients with language discordance had higher odds of hospital readmission. Specifically, the data showed a statistically significant increase in 28- or 30-day readmission rates for adults with a non-dominant language preference (OR 1.11; 95% CI: 1.04 to 1.18).
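As a quick sanity check on how such a result is read: significance at the 5% level follows directly from the confidence interval excluding 1, and, assuming the pooled estimate uses a standard Wald-type interval, the log-scale standard error can be recovered from the CI bounds. A small Python sketch:

```python
import math

# Pooled effect reported in the meta-analysis (Wald-type CI assumed).
odds_ratio, ci_low, ci_high = 1.11, 1.04, 1.18

# Significance at the 5% level: the 95% CI excludes 1.
significant = ci_low > 1.0 or ci_high < 1.0

# For a Wald interval, CI bounds are log(OR) +/- 1.96 * SE on the log scale,
# so the standard error can be recovered from the bounds.
se_log_or = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
z = math.log(odds_ratio) / se_log_or  # Wald z-statistic, roughly 3.2 here
```

In plain terms, the interval sitting entirely above 1 is what justifies calling the 11% increase in readmission odds statistically significant.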

Importantly, the impact of interpretation services was notable. In the four studies that confirmed the use of interpretation services during patient-clinician interactions, there was no significant difference in readmission rates. In contrast, studies that did not specify whether interpretation services were provided showed higher odds of readmission for language-discordant patients.

Adult patients with a non-dominant language preference also faced higher odds of emergency department (ED) readmission compared to those who spoke the dominant language. Specifically, the meta-analysis found a statistically significant increase in unplanned ED visits within 30 days among language-discordant adults.

However, this trend was not observed in studies where the use of interpretation services was verified. The authors concluded that “providing interpretation services may mitigate the impact of language discordance and reduce hospital readmissions among adult patients.”

For pediatric patients, the analysis indicated that children whose parents were language-discordant with providers had higher odds of ED readmission within 72 hours and seven days, compared to children whose parents spoke the dominant language fluently.

That said, the authors noted that a meta-analysis for pediatric hospital readmissions was not conducted due to the limited number of studies and inconsistencies in study design. The individual pediatric studies reviewed did not yield statistically significant results.

The study highlights key limitations in the current evidence base — particularly regarding pediatric readmissions and the effectiveness of language access interventions on clinical outcomes. Variability in how language discordance is defined and measured across studies was also identified as a limitation.

The authors recommend developing a more standardized approach to identifying patients facing language-related barriers to care and determining whose language preferences — whether the patient’s or a parent’s — are most influential in shaping clinical outcomes.
