Benchmark Tool
Compare Agents and LLMs
In the Compare Agents and LLMs screen, you can compare responses from different language models (LLMs) and agents. Enter a System Prompt to set the agent's behavior and a User Prompt to ask a specific question, and the tool generates responses across multiple models. This lets you quickly evaluate how different models handle the same input and identify the most appropriate model for your use case, with insight into response style, accuracy, and how each model interprets the instructions in your prompts.
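Conceptually, the comparison works like the minimal sketch below: the same System Prompt and User Prompt are sent to each selected model, and the responses are collected side by side. The ModelClient class, its complete() method, and the model names are hypothetical placeholders for illustration, not the tool's actual API.

```python
from dataclasses import dataclass

@dataclass
class ModelClient:
    """Placeholder wrapper around one LLM or agent endpoint (hypothetical)."""
    name: str

    def complete(self, system_prompt: str, user_prompt: str) -> str:
        # In a real setup this would call the model's API.
        return f"[{self.name}] response to: {user_prompt}"

def compare(models: list[ModelClient], system_prompt: str, user_prompt: str) -> dict[str, str]:
    """Send the same prompts to every model and return the responses keyed by model name."""
    return {m.name: m.complete(system_prompt, user_prompt) for m in models}

if __name__ == "__main__":
    models = [ModelClient("model-a"), ModelClient("model-b")]
    results = compare(models, "You are a concise assistant.", "Summarize our refund policy.")
    for name, answer in results.items():
        print(name, "->", answer)
```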

Create / Edit Benchmarks
Create questions to run against your agents.
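As a rough illustration, a benchmark can be thought of as a named set of questions, as in the sketch below. The structure is illustrative only and does not reflect how the tool stores benchmarks internally.

```python
# Hypothetical shape of a benchmark definition: a name plus the questions
# that will later be run against the selected models and agents.
benchmark = {
    "name": "Support FAQ benchmark",
    "questions": [
        "What is your refund policy?",
        "How do I reset my password?",
        "Which plans include priority support?",
    ],
}
```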

Run Benchmarks
Select a question from the dropdown, choose the models you want to test and the agents you want to include, then run the benchmark.
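The run itself amounts to sending the selected question to every chosen model and agent and recording each answer, roughly as sketched below. The run_prompt() helper and the model and agent names are hypothetical placeholders, not part of the tool.

```python
def run_prompt(target: str, question: str) -> str:
    # Placeholder for calling the selected model or agent.
    return f"[{target}] answer to: {question}"

def run_benchmark(question: str, models: list[str], agents: list[str]) -> list[dict]:
    """Run one question against every selected model and agent; collect the answers."""
    results = []
    for target in models + agents:
        results.append({"target": target, "question": question, "answer": run_prompt(target, question)})
    return results

if __name__ == "__main__":
    rows = run_benchmark("What is your refund policy?", ["model-a", "model-b"], ["support-agent"])
    for row in rows:
        print(row["target"], "->", row["answer"])
```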

My Benchmarks Results
Review the results of your benchmark runs and compare how each model and agent answered.

We’d love to hear from you! Reach out to [email protected]