Coding Performance with 10 Evaluators
An evaluation of 2 language models on 4 coding test prompts under a single system prompt, scored via blind comparison by a panel of 10 evaluator models.
| Metric | Value |
|---|---|
| Top score | 5.41 (gemini-3.1-pro-preview) |
| Average score | 5.00 |
| Spread | 0.82 pts |
| Response time | 3505ms |
| Total responses | 8 |
| Total evaluations | 80 |
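The headline figures reduce to simple arithmetic over the two per-model scores reported in the rankings below. A minimal sketch, with the values copied from this page:

```python
from statistics import mean

# Overall scores from the Model Rankings section below.
scores = {"gemini-3.1-pro-preview": 5.41, "gpt-5.4": 4.59}

top_model = max(scores, key=scores.get)
spread = round(max(scores.values()) - min(scores.values()), 2)  # 0.82 pts
average = round(mean(scores.values()), 2)                       # 5.00
print(top_model, scores[top_model], average, spread)
```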
Executive Insights
Key takeaways from this evaluation

| Insight | Model | Score | Note |
|---|---|---|---|
| Top Performer | gemini-3.1-pro-preview | 5.41 | 0.82 pts ahead of #2 |
| Best Value | gpt-5.4 | 4.59 | Best score-to-cost ratio |
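The Best Value call is a score-to-cost comparison. A minimal sketch of that arithmetic, using the scores and per-model costs from the rankings below; treating "value" as points of score per dollar is an assumption about how the ratio is defined:

```python
# Scores and costs copied from the Model Rankings section; field names are illustrative.
models = [
    {"name": "gemini-3.1-pro-preview", "score": 5.41, "cost_usd": 0.0791},
    {"name": "gpt-5.4", "score": 4.59, "cost_usd": 0.0101},
]

for m in models:
    # Points of score per dollar spent; higher means better value.
    m["value"] = m["score"] / m["cost_usd"]

best_value = max(models, key=lambda m: m["value"])
print(best_value["name"], round(best_value["value"], 1))
# gpt-5.4 comes out around 454 pts/$ versus roughly 68 pts/$ for gemini-3.1-pro-preview.
```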
Model Rankings
Ranked by overall performance score

| Model | Provider ID | Rating | Score | Responses | Avg Latency | Cost |
|---|---|---|---|---|---|---|
| gemini-3.1-pro-preview | google/gemini-3.1-pro-preview | Average | 5.41 | 4 | 3505ms | $0.0791 |
| gpt-5.4 | openai/gpt-5.4 | Below Average | 4.59 | 4 | — | $0.0101 |
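A sketch of how per-response results could roll up into the ranking rows above, assuming a flat list of response records. The field names and per-response values are hypothetical, and treating Cost as the summed cost across a model's responses is an assumption:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-response records; only the model names come from this page.
responses = [
    {"model": "gemini-3.1-pro-preview", "score": 5.5, "latency_ms": 3400, "cost_usd": 0.0200},
    {"model": "gemini-3.1-pro-preview", "score": 5.3, "latency_ms": 3610, "cost_usd": 0.0190},
    {"model": "gpt-5.4", "score": 4.7, "latency_ms": None, "cost_usd": 0.0030},
    {"model": "gpt-5.4", "score": 4.5, "latency_ms": None, "cost_usd": 0.0020},
]

by_model = defaultdict(list)
for r in responses:
    by_model[r["model"]].append(r)

for model, rows in by_model.items():
    latencies = [r["latency_ms"] for r in rows if r["latency_ms"] is not None]
    print(model, {
        "avg_score": round(mean(r["score"] for r in rows), 2),
        "responses": len(rows),
        "avg_latency_ms": round(mean(latencies)) if latencies else None,  # None renders as "—" above
        "total_cost_usd": round(sum(r["cost_usd"] for r in rows), 4),
    })
```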
Evaluator Consensus
How 10 evaluator models ranked the candidates via blind comparison

Majority agreement: 6 of 10 evaluators agree on the top model.

| Model | Avg Rank | Range | #1 Votes | Latency |
|---|---|---|---|---|
| gemini-3.1-pro-preview | 1.4 | #1–2 | 6/10 | 3505ms |
| gpt-5.4 | 1.6 | #1–2 | 4/10 | — |
Evaluator panel: gpt-5.4-mini, gemini-3.1-flash-lite-preview, claude-sonnet-4.6, minimax-m2.7, deepseek-v3.2, grok-4.1-fast, mistral-small-2603, qwen3.5-27b, kimi-k2.5, nova-2-lite-v1.
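The consensus figures follow directly from the per-evaluator rankings. A minimal sketch of that aggregation; the individual evaluator rankings are not shown on this page, so the input data below is illustrative, chosen only to match the reported 6/10 vs 4/10 split:

```python
from statistics import mean

# Hypothetical blind-comparison results: each evaluator returns the candidates
# ordered best to worst. Six prefer gemini-3.1-pro-preview, four prefer gpt-5.4.
rankings = (
    [["gemini-3.1-pro-preview", "gpt-5.4"]] * 6
    + [["gpt-5.4", "gemini-3.1-pro-preview"]] * 4
)

for name in ("gemini-3.1-pro-preview", "gpt-5.4"):
    positions = [r.index(name) + 1 for r in rankings]  # 1-based rank from each evaluator
    print(name, {
        "avg_rank": round(mean(positions), 1),                   # 1.4 vs 1.6
        "range": f"#{min(positions)}-{max(positions)}",          # both span #1-2
        "first_votes": f"{positions.count(1)}/{len(rankings)}",  # 6/10 vs 4/10
    })
```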
Score Comparison
Visual comparison of all model scores
Performance by System Prompt
How each model performed under the single system prompt used in this run

Top performer: gemini-3.1-pro-preview (5.41)
Performance by Test Prompt
Model results broken down by individual test prompts

| Test Prompt | Responses | Avg Score |
|---|---|---|
| Javascript Function | 2 | 5.00 |
| Write an Interval Merge Function | 2 | 5.00 |
| Debug Python | 2 | 5.00 |
| Refactor Javascript | 2 | 5.00 |
About This Evaluation
Methodology, criteria weights, and evaluation confidence

| Metric | Count |
|---|---|
| Total Responses | 8 |
| Total Evaluations | 80 |
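As a sanity check on these counts, assuming each of the 10 evaluators scores every response (the page does not state the pairing explicitly, so that pairing is an assumption):

```python
models = 2
test_prompts = 4
evaluators = 10

total_responses = models * test_prompts           # 8, as reported
total_evaluations = total_responses * evaluators  # 80, as reported
print(total_responses, total_evaluations)
```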