Comparing Speech-to-text Models

This page discusses the technical details of the speech-to-text models used by the Transcribe service, to help users choose the right model for their use case.

Overview

The CCV AI Transcribe service uses state-of-the-art speech-to-text and voice activity detection (VAD) models to provide high-quality and fast transcriptions. Currently, we offer the proprietary Google Gemini model and the open-source OpenAI Whisper and Qwen3-ASR models. We are continually adding high-performance transcription models as they become available.

Below is a quick comparison between the models documented on this page. We also include Cohere Transcribe as an external reference point. Please continue reading for more technical details.

| Model | Google Gemini | OpenAI Whisper | Qwen3-ASR | Cohere Transcribe |
| --- | --- | --- | --- | --- |
| Word error rate (WER)* (lower is better) | Unpublished by data source | 7.44% | 5.76% | 5.42% |
| Diarization quality | Best | Better | Better | Better |
| Open source | ✗ | ✓ | ✓ | ✗ |
| Runs on Brown-managed infrastructure | ✗ | ✓ | ✓ | ✗ |
| Supports word-level timestamps | ✗ | ✓ | ✓ | — |
| Captions/Subtitles | Enhanced SRT support with better readability | Enhanced SRT support with better readability | Enhanced SRT support with better readability | — |
| Speed | <5 min/audio hr; multiple audio files uploaded in the same job are transcribed simultaneously | <5 min/audio hr | <5 min/audio hr; 1.2–1.5× faster than Whisper | <5 min/audio hr |
| Recommendation | Better for short audio files; best-in-class transcription and diarization results | Better for longer audio files; best for use cases that require word-level timestamps and better subtitles | Better for longer audio files; best for word-level timestamps and better subtitles, plus noisy environments, singing voices, and Chinese/Cantonese dialects | Good for enterprise transcription of noisy audio, accented speech, customer calls, meetings, and specialized vocabulary |

* WER varies with the dataset the test is performed on. Data comes from the Open ASR Leaderboard.
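For context, WER is the word-level edit distance (substitutions, insertions, and deletions) between a reference transcript and the model's output, divided by the number of reference words. A minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # One-row dynamic-programming edit distance over words
    # (substitutions, insertions, and deletions all cost 1).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,          # delete a reference word
                      d[j - 1] + 1,      # insert a hypothesis word
                      prev + (r != h))   # substitute (free if words match)
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)
```

For example, a 7.44% WER corresponds to roughly 7 errored words per 100 reference words. Note that normalization choices (casing, punctuation) also affect reported numbers.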

Languages supported

The table below compares language support across the models listed on this page. For now, all listed languages are marked as supported for all models.

| Language | Google Gemini | OpenAI Whisper | Qwen3-ASR | Cohere Transcribe |
| --- | --- | --- | --- | --- |
| 🌐 English | ✓ | ✓ | ✓ | ✓ |
| 🇸🇦 Arabic | ✓ | ✓ | ✓ | ✓ |
| 🇨🇳 Chinese (Mandarin) | ✓ | ✓ | ✓ | ✓ |
| 🇳🇱 Dutch | ✓ | ✓ | ✓ | ✓ |
| 🇪🇸 Spanish | ✓ | ✓ | ✓ | ✓ |
| 🇮🇹 Italian | ✓ | ✓ | ✓ | ✓ |
| 🇩🇪 German | ✓ | ✓ | ✓ | ✓ |
| 🇷🇺 Russian | ✓ | ✓ | ✓ | ✓ |
| 🇵🇹 Portuguese | ✓ | ✓ | ✓ | ✓ |
| 🇯🇵 Japanese | ✓ | ✓ | ✓ | ✓ |
| 🇰🇷 Korean | ✓ | ✓ | ✓ | ✓ |
| 🇫🇷 French | ✓ | ✓ | ✓ | ✓ |
| 🇻🇳 Vietnamese | ✓ | ✓ | ✓ | ✓ |
| 🇮🇳 Hindi | ✓ | ✓ | ✓ | ✓ |
| 🇮🇩 Indonesian | ✓ | ✓ | ✓ | ✓ |

The Google Gemini model

Gemini is Google's flagship multimodal large language model, offering state-of-the-art performance for audio transcription.

At the moment, the Gemini model offers best-in-class diarization accuracy and transcription speed, and it is affordable enough that we can offer it for free. It is therefore our recommended model to try first when you use Transcribe.

The OpenAI Whisper model

Under the hood, we use the WhisperX implementation to handle transcription tasks. WhisperX first performs voice activity detection (VAD) on the audio file and chunks the audio into smaller speech segments before sending the segments to the Whisper model. In our experience, WhisperX significantly reduces model hallucination and improves accuracy on most transcription tasks.
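The chunking idea can be illustrated with a toy energy-threshold VAD. This is a simplified sketch for intuition only, not the trained VAD model that WhisperX actually uses:

```python
def split_speech(samples, frame_len=400, threshold=0.01):
    """Toy energy-based VAD: group consecutive loud frames into chunks.

    Returns (start, end) sample indices of detected speech segments.
    A real VAD model is far more robust to noise and soft speech.
    """
    segments, start = [], None
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / len(frame)
        if energy >= threshold:            # loud frame: inside speech
            if start is None:
                start = i
        elif start is not None:            # quiet frame: close the segment
            segments.append((start, i))
            start = None
    if start is not None:                  # audio ended mid-speech
        segments.append((start, len(samples)))
    return segments
```

Each returned segment would then be transcribed independently, which keeps inputs short and avoids feeding long silences to the model.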

OpenAI Whisper is the most popular and robust open-source speech-to-text model, first released by OpenAI in late 2022. Since its release, it has been one of the top open-source models for automatic speech recognition (ASR) tasks. The Whisper-large-v3 model that the Transcribe service uses was released in September 2023.
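The comparison table above lists word-level timestamps and enhanced SRT support for this model. As an illustration of what timestamps enable, here is a minimal sketch that formats timed segments as SRT subtitle cues (the segment data is made up):

```python
def to_srt(segments):
    """Format (start_sec, end_sec, text) tuples as SRT subtitle cues."""
    def ts(sec):
        # SRT timestamps look like HH:MM:SS,mmm
        ms = round(sec * 1000)
        h, rem = divmod(ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    lines = []
    for n, (start, end, text) in enumerate(segments, 1):
        lines.append(f"{n}\n{ts(start)} --> {ts(end)}\n{text}\n")
    return "\n".join(lines)

print(to_srt([(0.0, 2.5, "Hello, and welcome."),
              (2.5, 4.0, "Let's get started.")]))
```

This prints two numbered cues, e.g. `1`, `00:00:00,000 --> 00:00:02,500`, then the cue text. With word-level timestamps, cue boundaries can be placed at natural phrase breaks instead of fixed intervals, which is what makes the subtitles more readable.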

All transcription jobs using the OpenAI Whisper model run on GPU in a Google Cloud Run container. No calls to third-party APIs are made in this process, so users are assured that data never leaves Brown-managed infrastructure.

When the OpenAI Whisper model is selected, speaker diarization (recognizing and tracking different speakers) is performed by pyannote.audio, an open-source model specializing in speaker diarization. Because OpenAI Whisper only transcribes audio and does not support speaker diarization, both models run together on Brown-managed services. Although pyannote.audio is one of the best open-source speaker diarization models available, it still trails commercial alternatives. Therefore, if speaker diarization accuracy is a priority and/or the audio includes many speakers talking over each other, please choose the Google Gemini model for better performance on those tasks.
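Conceptually, combining the two models' outputs means giving each transcript segment the label of the diarization turn that overlaps it most. A simplified sketch of that merge step (not the exact pipeline used by the service):

```python
def assign_speakers(transcript, turns):
    """Label transcript segments with the best-overlapping speaker.

    transcript: list of (start, end, text) from the ASR model
    turns:      list of (start, end, speaker) from the diarization model
    """
    labeled = []
    for t_start, t_end, text in transcript:
        best, best_overlap = "unknown", 0.0
        for d_start, d_end, speaker in turns:
            # Overlap of the two time intervals (negative = disjoint).
            overlap = min(t_end, d_end) - max(t_start, d_start)
            if overlap > best_overlap:
                best, best_overlap = speaker, overlap
        labeled.append((best, text))
    return labeled
```

Because segment boundaries from the two models rarely line up exactly, the quality of the diarization turns directly limits how accurately segments can be attributed, which is why diarization model choice matters for overlapping speakers.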

The Qwen3-ASR model

The Qwen3-ASR model family is the state-of-the-art open-source speech-to-text offering as of early 2026. It consistently outperforms other commercial and open-source ASR models on almost all metrics, and it especially excels at the following speech-to-text tasks:

  • English accents and dialects from 16 countries

  • Challenging acoustic/linguistic scenarios: remains stable and produces reliable output under difficult conditions such as elderly/child speech and extremely low signal-to-noise ratio (SNR), maintaining very low character/word error rates

  • Singing-voice recognition: supports full-song transcription (Chinese/English) with background music (BGM)

  • Chinese, Cantonese, and 20+ regional dialects

Like jobs that use the OpenAI Whisper model, all transcription jobs using the Qwen3-ASR model run on GPU in a Google Cloud Run container, with no data exchanged with third-party APIs. Speaker diarization performance is the same as with the Whisper model, since it is also handled by pyannote.audio.

The Cohere Transcribe model

Cohere Transcribe is included here for comparison. It is not currently offered in CCV AI Transcribe.

Cohere Transcribe is Cohere's speech-to-text offering for enterprise transcription workflows. Cohere positions it for real-world audio such as customer calls, meetings, and other recordings where background noise, accents, and specialized terminology can reduce accuracy.

