Publication date: 5 April 2026
Sarvam 30B supports native tool calling and performs consistently on benchmarks designed to evaluate agentic workflows involving planning, retrieval, and multi-step task execution. On BrowseComp, it achieves 35.5, outperforming several comparable models on web-search-driven tasks. On Tau2 (avg.), it achieves 45.7, indicating reliable performance across extended interactions. SWE-Bench Verified remains challenging across models; Sarvam 30B shows competitive performance within its class. Taken together, these results indicate that the model is well suited for real-world agentic deployments requiring efficient tool use and structured task execution, particularly in production environments where inference efficiency is critical.
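As a rough sketch of the tool-calling workflow referenced above, the snippet below shows the OpenAI-compatible pattern many tool-calling models accept: declare a tool schema, then dispatch the model's emitted tool call to a local function. The `get_weather` tool, its schema, and the dispatch helper are illustrative assumptions, not part of Sarvam's published API.

```python
import json

# Hypothetical tool schema in the OpenAI-compatible "function" format
# commonly used for native tool calling (illustrative, not Sarvam-specific).
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Local implementations the agent loop can dispatch to.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub result for illustration

REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Execute one tool call emitted by the model and return its result."""
    fn = REGISTRY[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return fn(**args)

# Example tool call shaped like a model response message.
call = {"function": {"name": "get_weather", "arguments": '{"city": "Mumbai"}'}}
print(dispatch(call))  # -> Sunny in Mumbai
```

In a full agent loop, the string returned by `dispatch` would be appended to the conversation as a tool message so the model can continue multi-step task execution.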