# Evaluation Errors

Error categories, retryable vs. non-retryable errors, and how to retry.

When an evaluator or generator model fails, PeerLM categorizes the error to help you decide what to do next.
## Error Categories
| Category | Description | Retryable? |
|---|---|---|
| Provider Error | Temporary service issue on the provider side | Yes |
| Rate Limited | Too many requests to the provider | Yes (wait) |
| Timeout | Provider took too long to respond | Yes |
| Parse Error | Response could not be parsed as the expected JSON | Yes |
| Auth Error | Invalid API key for the provider | No |
| Model Unavailable | Model has been deprecated or is offline | No |
| Context Too Long | Prompt exceeds the model's max context | No (shorten prompt) |
| Content Filtered | Blocked by the model's safety filter | No (rephrase) |
| Model Refusal | Model declined, interpreting the prompt as an attempt at manipulation | No (rephrase) |
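The split in the table above maps naturally onto a retry policy: retry transient failures with a delay, and surface permanent ones to the user. The sketch below assumes snake_case category identifiers and a simple exponential backoff; the actual category strings in PeerLM's error payloads may differ.

```python
# Hypothetical category identifiers mirroring the table above;
# the real PeerLM error payload may use different names.
RETRYABLE = {"provider_error", "rate_limited", "timeout", "parse_error"}
NON_RETRYABLE = {"auth_error", "model_unavailable",
                 "context_too_long", "content_filtered", "model_refusal"}

def should_retry(category: str) -> bool:
    """Return True if the error category is safe to retry."""
    return category in RETRYABLE

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds.
    Rate-limited errors in particular benefit from waiting before a retry."""
    return min(cap, base * (2 ** attempt))
```

For rate-limited errors, honor any `Retry-After` hint from the provider if one is present rather than relying on the backoff alone.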
## Retrying Evaluators
If a single evaluator model fails (e.g., times out), you can retry just that evaluator from the run detail page. This re-runs only that evaluator's scoring and re-aggregates the results, which is faster and cheaper than retrying the entire run.
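Conceptually, retrying one evaluator means replacing only its score and recomputing the aggregate. The sketch below is an illustration of that idea, not PeerLM's actual API: `run_evaluator`, the score dictionary shape, and mean aggregation are all assumptions.

```python
from statistics import mean

def retry_evaluator(scores: dict, evaluator: str, run_evaluator) -> tuple:
    """Re-run a single evaluator and recompute the aggregate score.

    `scores` maps evaluator name -> score; `run_evaluator` is a
    hypothetical callable that re-scores one evaluator by name.
    """
    scores = dict(scores)                     # leave other scores untouched
    scores[evaluator] = run_evaluator(evaluator)  # re-score only this one
    return scores, mean(scores.values())          # re-aggregate
```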
## Viewing Error Details
Error details are stored in the evaluation metadata. Expand the evaluation scores section on the run detail page to see error categories and messages for any failed evaluations.
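If you export run data for analysis, failed evaluations can be pulled out of the metadata programmatically. The JSON shape below (field names, evaluator labels) is a hypothetical example, not PeerLM's documented schema.

```python
import json

# Hypothetical evaluation metadata; actual field names may differ.
raw = '''{
  "evaluations": [
    {"evaluator": "judge-a", "score": 0.8},
    {"evaluator": "judge-b",
     "error": {"category": "timeout",
               "message": "Provider took too long to respond"}}
  ]
}'''

meta = json.loads(raw)

# Collect (evaluator, category) for every failed evaluation.
failed = [(e["evaluator"], e["error"]["category"])
          for e in meta["evaluations"] if "error" in e]
```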