Abstract
EVA-Bench presents a comprehensive evaluation framework for voice agents that simulates realistic conversations and measures performance across multiple voice-specific failure modes using novel accuracy and experience metrics.
Voice agents, artificial intelligence systems that conduct spoken conversations to complete tasks, are increasingly deployed across enterprise applications. However, no existing benchmark jointly addresses two core evaluation challenges: generating realistic simulated conversations, and measuring quality across the full scope of voice-specific failure modes. We present EVA-Bench, an end-to-end evaluation framework that addresses both. On the simulation side, EVA-Bench orchestrates bot-to-bot audio conversations over dynamic multi-turn dialogues, with automatic simulation validation that detects user-simulator errors and regenerates affected conversations before scoring. On the measurement side, EVA-Bench introduces two composite metrics: EVA-A (Accuracy), capturing task completion, faithfulness, and audio-level speech fidelity; and EVA-X (Experience), capturing conversation progression, spoken conciseness, and turn-taking timing. Both metrics apply uniformly across agent architectures, enabling direct cross-architecture comparison. EVA-Bench includes 213 scenarios across three enterprise domains, a controlled perturbation suite for accent and noise robustness, and pass@1, pass@k, and pass^k measurements that distinguish peak from reliable capability. Across 12 systems spanning all three architectures, we find: (1) no system simultaneously exceeds 0.5 on both EVA-A pass@1 and EVA-X pass@1; (2) peak and reliable performance diverge substantially (median pass@k - pass^k gap of 0.44 on EVA-A); and (3) accent and noise perturbations expose substantial robustness gaps, with effects varying across architectures, systems, and metrics (mean effects of up to 0.314). We release the full framework, evaluation suite, and benchmark data under an open-source license.
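To make the distinction between peak and reliable capability concrete, the sketch below shows one common way to estimate pass@1, pass@k, and pass^k from n repeated trials of a scenario. It uses the standard combinatorial estimators (at least one success out of k draws versus all k draws succeeding); variable names are illustrative and the exact formulation used by EVA-Bench may differ.

```python
# Minimal sketch (not the official EVA-Bench scorer) of pass@1, pass@k, and
# pass^k estimated from n repeated trials with c observed successes.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k sampled trials succeeds (peak capability)."""
    if n - c < k:          # fewer than k failures: every k-subset contains a success
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def pass_hat_k(n: int, c: int, k: int) -> float:
    """Probability that all k sampled trials succeed (reliable capability)."""
    if c < k:              # not enough successes to fill a k-subset
        return 0.0
    return comb(c, k) / comb(n, k)

# Example: a system that passes 5 of 8 attempts on a scenario.
n, c, k = 8, 5, 4
print(f"pass@1 = {pass_at_k(n, c, 1):.3f}")   # 0.625
print(f"pass@{k} = {pass_at_k(n, c, k):.3f}")  # high: at least one success is likely
print(f"pass^{k} = {pass_hat_k(n, c, k):.3f}") # low: all k succeeding is rare
```

A large gap between the last two numbers is exactly the peak-versus-reliable divergence the abstract reports.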
Community
How do you know a voice agent is good? Task completion isn't enough. A voice agent can call the correct tools and still misread a confirmation code, fabricate a policy detail, or respond so slowly that a caller hangs up. Catching those failures requires evaluation that goes beyond transcripts, and beyond a single domain or acoustic condition.
Today, we're releasing EVA-Bench, designed to surface exactly that.
Three enterprise domains. We've scaled from a single dataset to three: HR, ITSM, and CSM. Because the best voice agent for customer service isn't necessarily the best one for HR or IT support.
If you prefer the video/audio modality, please check out the podcast about this work: https://www.youtube.com/watch?v=x7Ks932T18o