PeerLM
Evaluation Report

LLM Evaluation • Comparative Evaluation

Coding Performance with 10 Evaluators

Comprehensive evaluation of 2 language models across 1 system prompt with rigorous benchmarking and scoring criteria.

Top Score: 5.53 (gpt-5.4)
Average Score: 5.00 (spread: 1.06 pts)
Avg Latency: 500ms
Evaluations: 80 (8 total responses)

Executive Insights

Key takeaways from this evaluation

Top Performer: gpt-5.4 (5.53, 1.06 pts ahead of #2)

Model Rankings

Ranked by overall performance score

1. gpt-5.4 (openai/gpt-5.4): Winner
   Performance Score: 5.53/10 (Average)
   Responses: 4 • Cost: $0.0101

2. kimi-k2.5 (moonshotai/kimi-k2.5)
   Performance Score: 4.47/10 (Below Average)
   Responses: 4 • Avg Latency: 500ms • Cost: $0.0118

Evaluator Consensus

How 10 evaluator models ranked the candidates via blind comparison

Unanimous Agreement: all 10 evaluators agree on the top model

1. gpt-5.4: Unanimous Winner
   Avg Rank: 1.0 • Range: #1 • #1 Votes: 10/10

2. kimi-k2.5
   Avg Rank: 2.0 • Range: #2 • #1 Votes: 0/10 • Latency: 500ms
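The report does not include its aggregation code, but the Avg Rank, Range, and #1 Votes figures above can be derived mechanically from per-evaluator rankings. A minimal sketch, using three hypothetical evaluators and the candidate names from this report for illustration:

```python
from collections import defaultdict

# Hypothetical per-evaluator rankings: evaluator -> candidates ordered
# best-first. Real runs would have one entry per evaluator model.
rankings = {
    "evaluator-a": ["gpt-5.4", "kimi-k2.5"],
    "evaluator-b": ["gpt-5.4", "kimi-k2.5"],
    "evaluator-c": ["gpt-5.4", "kimi-k2.5"],
}

def consensus(rankings):
    """Aggregate blind-comparison rankings into avg rank, range, #1 votes."""
    ranks = defaultdict(list)
    for order in rankings.values():
        for position, model in enumerate(order, start=1):
            ranks[model].append(position)
    return {
        model: {
            "avg_rank": sum(r) / len(r),
            "range": (min(r), max(r)),
            "top_votes": r.count(1),
        }
        for model, r in ranks.items()
    }

stats = consensus(rankings)
print(stats["gpt-5.4"])   # avg_rank 1.0, range (1, 1), top_votes 3
print(stats["kimi-k2.5"])  # avg_rank 2.0, range (2, 2), top_votes 0
```

With unanimous rankings like the ones in this run, every candidate's range collapses to a single position, which is why the report shows Range as a single rank rather than a spread.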

Per-Evaluator Rankings

How each evaluator model individually ranked the candidates

gpt-5.4-mini (8 evals): 1. gpt-5.4 (7.50), 2. kimi-k2.5 (2.50)
gemini-3.1-flash-lite-preview (8 evals): 1. gpt-5.4 (7.50), 2. kimi-k2.5 (2.50)
claude-sonnet-4.6 (8 evals): 1. gpt-5.4 (5.00), 2. kimi-k2.5 (5.00)
minimax-m2.7 (8 evals): 1. gpt-5.4 (5.00), 2. kimi-k2.5 (5.00)
kimi-k2.5 (8 evals): 1. gpt-5.4 (5.00), 2. kimi-k2.5 (5.00)
deepseek-v3.2 (8 evals): 1. gpt-5.4 (5.00), 2. kimi-k2.5 (5.00)
grok-4.1-fast (8 evals): 1. gpt-5.4 (5.00), 2. kimi-k2.5 (5.00)
mistral-small-2603 (8 evals): 1. gpt-5.4 (5.00), 2. kimi-k2.5 (5.00)
qwen3.5-27b (8 evals): 1. gpt-5.4 (5.00), 2. kimi-k2.5 (5.00)
nova-2-lite-v1 (4 evals): 1. gpt-5.4 (5.00), 2. kimi-k2.5 (5.00)

Score Comparison

Visual comparison of all model scores


Performance by System Prompt

How each model performs across different evaluation contexts

Coding Agent (8 responses • avg score 5.00)
Top Performer: gpt-5.4 (5.53)

1. gpt-5.4: 5.53
2. kimi-k2.5: 4.47

Performance by Test Prompt

Model results broken down by individual test prompts

JavaScript Function: 2 responses • avg score 5.00
Write an Interval Merge Function: 2 responses • avg score 5.00
Debug Python: 2 responses • avg score 5.00
Refactor JavaScript: 2 responses • avg score 5.00
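For context on what the candidates were asked to do, one of the test prompts requests an interval merge function. A typical reference solution to that task (not any model's actual output from this run) looks like:

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals into a minimal set.

    Sort by start, then either extend the last merged interval or
    open a new one.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

print(merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]]))
# [[1, 6], [8, 10], [15, 18]]
```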

About This Evaluation

Methodology, criteria weights, and evaluation confidence

Evaluation Criteria
Method: comparative
Weights: Accuracy 50% • Instruction Following 50%

Total Responses: 8
Total Evaluations: 80
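Given the stated 50/50 criteria weights, an overall score is presumably a weighted average of per-criterion scores. A minimal sketch, with hypothetical per-criterion values (the report does not publish the criterion-level breakdown):

```python
# Weights match the report's stated criteria; the criterion scores
# passed in below are illustrative, not from this run.
WEIGHTS = {"accuracy": 0.5, "instruction_following": 0.5}

def overall_score(criterion_scores, weights=WEIGHTS):
    """Weighted average of per-criterion scores on a 0-10 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * criterion_scores[c] for c in weights)

score = overall_score({"accuracy": 6.2, "instruction_following": 4.86})
print(round(score, 2))  # 5.53
```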


This report was generated by PeerLM.