PeerLM Evaluation Report

LLM Evaluation • Comparative Evaluation

Coding Performance with 10 Evaluators — Run

Comprehensive evaluation of 2 language models across 1 system prompt with rigorous benchmarking and scoring criteria.

Top Score: 8.95 (claude-opus-4.6)
Average Score: 5.00 (spread: 7.90 pts)
Evaluations: 80 (8 total responses)

Executive Insights

Key takeaways from this evaluation

Top Performer: claude-opus-4.6 (8.95), 7.90 pts ahead of #2

Model Rankings

Ranked by overall performance score

1. claude-opus-4.6 (anthropic/claude-opus-4.6), Winner
   Performance Score: 8.95/10 (Very Good) • Responses: 4 • Cost: $0.0408

2. grok-4 (x-ai/grok-4)
   Performance Score: 1.05/10 (Needs Improvement) • Responses: 4 • Cost: $0.0925

Evaluator Consensus

How 10 evaluator models ranked the candidates via blind comparison

Unanimous agreement: all 10 evaluators agree on the top model.

1. claude-opus-4.6 (Unanimous Winner)
   Avg Rank: 1.0 • Range: #1 • #1 Votes: 10/10

2. grok-4
   Avg Rank: 2.0 • Range: #2 • #1 Votes: 0/10
Per-Evaluator Rankings
How each evaluator model individually ranked the candidates

Every evaluator ranked claude-opus-4.6 first and grok-4 second; the scores each evaluator assigned to the two candidates are shown below.

Evaluator                       Evals  claude-opus-4.6  grok-4
gpt-5.4-mini                    8      10.00            0.00
gemini-3.1-flash-lite-preview   8      7.50             2.50
claude-sonnet-4.6               8      10.00            0.00
minimax-m2.7                    8      10.00            0.00
kimi-k2.5                       8      7.50             2.50
deepseek-v3.2                   8      10.00            0.00
grok-4.1-fast                   8      7.50             2.50
mistral-small-2603              8      10.00            0.00
qwen3.5-27b                     8      10.00            0.00
nova-2-lite-v1                  4      5.00             5.00
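The consensus figures above (average rank, rank range, and #1 votes) follow directly from these per-evaluator rankings. The Python sketch below recomputes them as a rough illustration; the data structure and field names are assumptions made for this example, not PeerLM's actual pipeline.

```python
# Minimal sketch (not PeerLM's actual implementation) of deriving the consensus
# stats from per-evaluator rankings. The rankings below are taken from the
# tables in this report; the dict layout itself is an illustrative assumption.
from collections import defaultdict
from statistics import mean

# evaluator -> candidates ordered best-first (rank 1, rank 2, ...)
per_evaluator_rankings = {
    "gpt-5.4-mini": ["claude-opus-4.6", "grok-4"],
    "gemini-3.1-flash-lite-preview": ["claude-opus-4.6", "grok-4"],
    "claude-sonnet-4.6": ["claude-opus-4.6", "grok-4"],
    "minimax-m2.7": ["claude-opus-4.6", "grok-4"],
    "kimi-k2.5": ["claude-opus-4.6", "grok-4"],
    "deepseek-v3.2": ["claude-opus-4.6", "grok-4"],
    "grok-4.1-fast": ["claude-opus-4.6", "grok-4"],
    "mistral-small-2603": ["claude-opus-4.6", "grok-4"],
    "qwen3.5-27b": ["claude-opus-4.6", "grok-4"],
    "nova-2-lite-v1": ["claude-opus-4.6", "grok-4"],
}

ranks = defaultdict(list)            # model -> every rank it received
first_place_votes = defaultdict(int)  # model -> number of #1 votes

for ranking in per_evaluator_rankings.values():
    for position, model in enumerate(ranking, start=1):
        ranks[model].append(position)
        if position == 1:
            first_place_votes[model] += 1

for model, received in ranks.items():
    print(f"{model}: avg rank {mean(received):.1f}, "
          f"range #{min(received)}-#{max(received)}, "
          f"#1 votes {first_place_votes[model]}/{len(per_evaluator_rankings)}")
# claude-opus-4.6: avg rank 1.0, range #1-#1, #1 votes 10/10
# grok-4: avg rank 2.0, range #2-#2, #1 votes 0/10
```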

Score Comparison

Visual comparison of all model scores


Performance by System Prompt

How each model performs across different evaluation contexts

Coding Agent (8 responses • avg score 5.00)
Top performer: claude-opus-4.6 (8.95)

1. claude-opus-4.6: 8.95
2. grok-4: 1.05

Performance by Test Prompt

Model results broken down by individual test prompts

Test Prompt                        Responses  Avg Score
Javascript Function                2          5.00
Write an Interval Merge Function   2          5.00
Debug Python                       2          5.00
Refactor Javascript                2          5.00
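Each per-prompt average is the mean score of the responses submitted for that prompt (one per model). The sketch below illustrates that roll-up; the individual response scores are placeholder assumptions, since the report publishes only per-model and per-prompt averages, not per-response scores.

```python
# Illustrative per-prompt roll-up. Only the prompt names and the 5.00 averages
# come from the report; the per-response scores below are assumed placeholders.
from statistics import mean

responses = [
    {"prompt": "Javascript Function", "model": "claude-opus-4.6", "score": 8.95},
    {"prompt": "Javascript Function", "model": "grok-4", "score": 1.05},
    {"prompt": "Debug Python", "model": "claude-opus-4.6", "score": 8.95},
    {"prompt": "Debug Python", "model": "grok-4", "score": 1.05},
    # ... remaining prompts follow the same pattern
]

by_prompt = {}
for r in responses:
    by_prompt.setdefault(r["prompt"], []).append(r["score"])

for prompt, scores in by_prompt.items():
    print(f"{prompt}: {len(scores)} responses, avg score {mean(scores):.2f}")
```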

About This Evaluation

Methodology, criteria weights, and evaluation confidence

Evaluation Criteria
Method: comparative
  • Accuracy: 50%
  • Instruction Following: 50%
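Under this setup, a model's overall score is a weighted combination of the criteria above, split 50/50 between Accuracy and Instruction Following. The sketch below shows that roll-up; the per-criterion values are hypothetical placeholders chosen so the example reproduces the reported 8.95, since the report does not publish a per-criterion breakdown.

```python
# Hedged sketch of the weighted criteria roll-up. The 50/50 weights come from
# the report; the per-criterion scores are hypothetical placeholders.
criteria_weights = {"Accuracy": 0.5, "Instruction Following": 0.5}
per_criterion_scores = {"Accuracy": 9.1, "Instruction Following": 8.8}  # hypothetical

overall = sum(criteria_weights[c] * per_criterion_scores[c] for c in criteria_weights)
print(f"Weighted overall score: {overall:.2f}/10")  # 8.95 with these placeholders
```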

Total Responses: 8
Total Evaluations: 80

This report was generated by PeerLM.